MESSAGE FROM THE CHAIR
Dear CISE community,
I am honored to serve as the interim chair of the Department of Computer & Information Science & Engineering during this exciting time of innovation and growth. First, I extend my sincere gratitude to former chair and Distinguished Professor Juan E. Gilbert for his leadership and unwavering commitment to the department. His vision strengthened our position as a global leader in research, education and service.
Looking ahead, I am inspired by the remarkable achievements of our faculty, students and alumni. Recent research breakthroughs underscore our department’s critical role in advancing technology that impacts society.
In this issue, we showcase our pioneering work in deepfake audio detection by collaborators Kevin Butler, Ph.D., and Susan Nittrouer, Ph.D. This research holds promise for the future of cybersecurity and helps hearing-impaired individuals identify fraud.
Further, a UF team led by Gilbert — with a paper authored by DeKita Rembert, Ph.D. — earned an Outstanding Paper Award for “InterestMe Math,” a project that personalizes math word problems by using students’ career interests to reduce equity gaps in learning outcomes.
Finally, you will read about how UF’s Sara Rampazzi, Ph.D., and other international researchers revealed a critical privacy vulnerability in digital microphones that unintentionally emit intelligible radio signals, potentially allowing eavesdropping through walls.
We also celebrate outstanding faculty excellence.
Congratulations to Prabhat Mishra, Ph.D., on receiving a prestigious University-level mentoring award. He also was honored with Inventions of the Year awards by UF Innovate Tech Licensing for his groundbreaking work in quantum computing and cybersecurity.
In this issue we also celebrate significant milestones and transitions.
We bid farewell to Professor Joseph N. Wilson, Ph.D., who retired after an incredible 41 years of service. His contributions to teaching, research and mentorship left an enduring legacy that will continue to inspire future generations.
We’re excited to welcome five new faculty members: Yuanyuan Lei, Ph.D., Marvin Andujar, Ph.D., Sumit Kumar Jha, Ph.D., Jingwei Sun, Ph.D., and Hoang-Dung Tran, Ph.D.
Also featured is the UF Student Information Security Team, which earned second place at the 2025 Global Collegiate Penetration Testing Competition and took top honors in the “Cloud 9” and “Best Slide” categories.
As we celebrate these remarkable milestones and achievements, I invite you to join me in congratulating our faculty members, students and alumni. Your support is essential to our department’s continued growth, success and global leadership in Computer & Information Science. I look forward to increased interaction with our alumni, innovation in computing, and impact far beyond the walls of Malachowsky Hall.
Sincerely,
Patrick Traynor, Ph.D.
Interim Chair, Department of Computer & Information Science & Engineering
Warren Dixon, Ph.D.
INTERIM DEAN, HERBERT WERTHEIM COLLEGE OF ENGINEERING
Patrick Traynor, Ph.D.
PROFESSOR & INTERIM DEPARTMENT CHAIR & CISE ASSOCIATE CHAIR FOR RESEARCH
Tamer Kahveci, Ph.D.
CISE ASSOCIATE CHAIR OF ACADEMIC AFFAIRS
Drew Brown, M.S.
GRAPHIC DESIGNER
MAGAZINE EDITOR
Allison Logan, M.A.
ASSISTANT DIRECTOR OF CREATIVE SERVICES
MAGAZINE EDITOR
FACULTY AWARDS: FROM EXPLORING TO MENTORING
DORLEY ELECTED TO FULBRIGHT ASSOCIATION NATIONAL BOARD
Emmanuel Dorley, Ph.D., has been interested in how technology works since he was 10 years old. His mother had purchased him a Gateway PC, and his interest in computers and gadgets only grew from there.
“I spent countless hours trying to understand how [the computer] worked,” said Dorley, an assistant professor. “She would also get me cool gadgets like a walkie-talkie watch, calculator watch, and other devices from Fingerhut, which sparked my interest in technology and computing.”
The event that truly solidified his path to computer science was entering a pitch competition in high school, where he came up with an idea similar to the Apple Wallet. Lacking the skills necessary to bring it to life, he decided to pursue a degree in computer engineering so he could learn to develop the hardware and software for his ideas.
While attending North Carolina Agricultural and Technical State University, Dorley was awarded the first Fulbright Award in the school’s history. He continued his relationship with the Fulbright Association by becoming an ambassador and was recently named one of seven newly elected members of the National Board of the Fulbright Association.
“I’m excited to continue serving the Fulbright community and to give back to an organization that has profoundly influenced my development, my work and my worldview,” Dorley said.
As a Fulbright ambassador, Dorley has strengthened connections with historically Black colleges and universities through mentoring, presentations and workshops. Dorley was a speaker at Fulbright’s 2019 rebranding event in Washington, D.C.
“I’m incredibly proud of helping others at A&T receive a Fulbright after my initial experience and of inspiring students who didn’t think they had a shot at it. It’s a joy to receive an email from a student who initially doubted themselves, only to apply and win a Fulbright,” Dorley said.
As he continues his work with the Fulbright Association, he plans to encourage more alumni to get involved and stay connected with their fellow members.
“For those who haven’t yet participated in Fulbright, I strongly encourage them to apply. Fulbright is an elite program, but it’s not elitist — there’s a place for many great people who may not initially see themselves as ‘Fulbright material,’” Dorley said.
“For past awardees, I’d encourage them to engage with the Association, as we need their insights and experience to help strengthen the program and the Association.”
By Drew Brown
EMMANUEL DORLEY, PH.D.
MISHRA RECEIVES MENTORSHIP AWARDS
Prabhat Mishra, Ph.D., was selected as a 2024-2025 HWCOE Doctoral Dissertation Advisor/Mentoring Awardee and a 2024-2025 University of Florida Graduate School Faculty Doctoral Mentoring Awardee. Nominations for this award come from current UF graduate students, faculty members, administrators, and alumni.
“Dr. Mishra has always cared for his students and helped us in many directions. He guides us to solve challenging problems and teaches us how to explore new directions as standalone researchers,” said an unnamed former student in their nomination submission. “Dr. Mishra’s commitment, work ethic, and success have set a high standard in his group, and he always encourages his students to work hard, and achieve more.”
Mishra explained that his mentorship philosophy can be summed up by a statement from his high school teacher: a great mentor brings a student to the threshold of knowledge and ignites the interest to cross the threshold. Mishra has mentored 13 Ph.D. students who are successful in their careers, and he is currently advising six more students.
PRABHAT MISHRA, PH.D.
“I am grateful to UF for giving me the opportunity to mentor bright and talented Ph.D. students. It has been a rewarding experience in the last 20 years,” Mishra said. “I am very proud of the achievements of the 13 Ph.D. students who have graduated from my research group and are very successful in their careers.”
By Drew Brown
KAHVECI APPOINTED EDITOR-IN-CHIEF OF IEEE TCBB
TAMER KAHVECI, PH.D.
Tamer Kahveci, Ph.D., was recently appointed editor-in-chief of IEEE Transactions on Computational Biology and Bioinformatics (TCBB), a leading bimonthly journal published jointly by the IEEE Computer Society, IEEE Computational Intelligence Society, and the IEEE Engineering in Medicine and Biology Society.
“I am deeply honored to serve as the editor-in-chief of the IEEE Transactions on Computational Biology and Bioinformatics and am excited to advance cutting edge research in this rapidly evolving, interdisciplinary domain,” Kahveci said.
Kahveci was appointed on the recommendation of the TCBB Steering Committee, with the consent of the IEEE Computer Society president and the chairs of the ACM Publications Board. His term will end in December 2027, with eligibility for reappointment for two additional years. During his term, Kahveci will also serve as a voting member of the Computer Society Transactions Operations Committee.
By Drew Brown
New Faculty
Yuanyuan Lei, Ph.D., is an assistant professor who joined the CISE department this fall. Her research focuses on Natural Language Processing (NLP), Large Language Models (LLMs), Machine Learning, and Artificial Intelligence. She specializes in Knowledge-aware Language Modeling, enhancing LLMs with diverse forms of knowledge to improve text-based narrative understanding and discourse analysis. Her work aims to develop innovative algorithms and systems that advance LLMs’ capabilities in knowledge understanding and knowledge-intensive reasoning. She is also interested in NLP/LLMs + X research, integrating NLP models and LLMs into domains such as science, engineering, law and healthcare.
Before joining UF, Yuanyuan earned her Ph.D. in Computer Science from Texas A&M University. She holds a master’s degree in Statistics from Columbia University and dual bachelor’s degrees in Mathematics and Computer Science from the University of Science and Technology of China.
Marvin Andujar, Ph.D., an associate instructional professor, joined the department in Fall 2025. Andujar’s research focuses on using Brain-Computer Interface (BCI) and AI methods to improve the attention retention of students with ADHD. His teaching areas have been Human-Computer Interaction (HCI), Computer Organization, and BCI.
Prior to joining CISE, Andujar was an assistant professor at the University of South Florida, where he taught courses in Computer Architecture and BCI. He also researched adapting P300 methods to create abstract paintings from brain activity as a way to enhance attention levels. He has also worked on brain-controlled drones, using neural signals both to control drone movement and to improve the user’s cognitive and affective states.
Andujar holds a Ph.D. in Human-Centered Computing from the University of Florida and bachelor’s degrees in Computer Science and Mathematics from Kean University. He aims to continue expanding his teaching in neural interfaces at the national level and to advance the fields of BCI and HCI.
Sumit Kumar Jha, Ph.D., joins the University of Florida as a professor in the Department of Computer and Information Science and Engineering. He earned his Ph.D. and master's in Computer Science from Carnegie Mellon University and a B.Tech. (Honors) in Computer Science and Engineering from the Indian Institute of Technology Kharagpur. His research on high-assurance, safe, explainable, and efficient AI appears in highly selective conferences such as AAAI, DAC, ICCAD, ICLR, ICML, IJCAI, and NeurIPS. His research has been supported by the Air Force Office of Scientific Research (AFOSR), the Air Force Research Laboratory, the Defense Advanced Research Projects Agency, the Department of Energy, the Florida Center for Cybersecurity, the National Nuclear Security Administration, the National Science Foundation, Oak Ridge National Laboratory, and the Office of Naval Research. Jha has received the AFOSR Young Investigator Program Award, multiple best-paper awards, and several best-paper nominations.
Jingwei Sun, Ph.D., is an assistant professor in the Department of Computer and Information Science and Engineering. He earned his Ph.D. in Electrical and Computer Engineering from Duke University. Jingwei’s research centers on efficient and trustworthy edge intelligent systems, focusing on on-device adaptation, collaborative intelligence, and robust artificial intelligence at the edge. A highlight is the first efficient finetuning of a 7B-parameter Llama-2 model on smartphones with user data, enabling large-scale, private, and personalized AI on mobile devices.
Sun’s publications have appeared in NeurIPS, ICML, CVPR, and ICLR, as well as in leading systems conferences including MobiCom, SenSys, and MLSys. He received the Best Paper Award at the AAAI Spring Symposium Series in 2024. He also brings extensive industry collaboration experience, having led projects with world-leading companies such as NVIDIA, Lenovo, and Accenture. His work has been integrated into NVIDIA’s federated learning platform, NVFlare.
Hoang-Dung Tran, Ph.D., joins CISE as an assistant professor. He earned his doctorate in computer science from Vanderbilt University in 2020.
His research focuses on the verification, validation and robust design of autonomous cyber-physical systems with learning-enabled components. His areas of expertise include reachability analysis and the robustness certification of deep neural networks (DNNs), formal verification of autonomous systems, safe and robust DNN training and real-time verification and motion planning for distributed CPS. He is also interested in robust control, stability analysis of nonlinear control systems and networked control systems.
Before joining UF, he was an assistant professor in the School of Computing at the University of Nebraska–Lincoln, where he co-directed the NIMBUS Lab and led the Verification and Validation for Assured Autonomy Group.
UN RATIFIES CASH APP PROTECTION
As mobile-money services were growing at a rapid clip in the developing world 10 years ago, University of Florida computer scientists and cybersecurity experts Kevin Butler, Ph.D., and Patrick Traynor, Ph.D., were early sentinels, raising concerns about the lack of security that could lead to real problems for the user.
In a 2014 study, the two professors uncovered security vulnerabilities of mobile cash apps, especially in the Global South, where such technologies were becoming essential in the absence of robust banking systems.
Fast forward to today: Butler and colleagues have developed a comprehensive framework for securing mobile money applications that earlier this year was ratified and endorsed by the International Telecommunication Union, a United Nations specialized agency, marking a significant step toward safer digital financial transactions worldwide. All United Nations member states voted to endorse these recommendations.
The framework includes 120 detailed recommendations and controls designed to systematically secure every facet of the financial ecosystem, ensuring comprehensive protection for users and transactions. While the recommendations are non-binding, they are widely followed by telecom providers due to their high quality and the interoperability they provide among networks.
By Karen Dooley
JOSEPH N. WILSON, PH.D., RETIRES AFTER 41 YEARS
I was hired in the department for the fall semester of 1984. At that time, the department was not nearly as large and well-known as it is now. My Ph.D. advisor, Terry Pratt, was the author of one of the early influential books on programming languages, and I thought that would be my research area for my entire career. As it turned out, programming languages research at that time was shifting from academia to industry and there was little funding to support research in that area, so things turned out somewhat differently.
Soon after I came to UF, Gerhard Ritter transferred to our department from Mathematics, and I joined his research program. He was developing an algebraic notation that would give a firm foundation to image processing. The Image Algebra project, funded by the Armament Laboratory at Eglin AFB, was productive for a number of years. One of our students on that project was Paul Gader, who went on to work at the University of Missouri in Columbia.
Some time after Paul left, Gerhard was appointed as the chair of the department, and I took on the unenviable role of Associate Chair. In those days, it was a comprehensive job that included assigning space, assigning teaching assistants, and dealing with accreditation.
In about 2001, Paul Gader came back to work at UF from Columbia and told me he could use some help on a project that was funded by the Countermine Division at Ft. Belvoir. I joined him in working on detecting landmines and other buried explosive hazards using data from a variety of sensors for about the next 20 years. Systems we helped develop led route clearance personnel in Afghanistan to write to us letting us know that, for the first time, they were able to find a buried IED without their vehicle being blown up. Working on these projects was a wonderful experience because all of us were interested in just solving the problem. In the landmine group, we would talk about things, laugh at jokes we made, and realize that there were only a few hundred people worldwide who would understand why they were funny.
For many years, I was the chair of the Facilities and Equipment committee. I was the liaison for just about every CISE system administrator from Andy Wilcox up through Dan Eicher. This was an especially active role from about 1985 through 2005, when the university network was still being created. For many years the ufl.edu domain was maintained by the CISE Department. Many of our students and system administrators went on to become valued members of the University IT administration and information security personnel. During those years, numerous UF network denizens would meet at the Salty Dog on Wednesdays, and the notorious dog (email) list, of which I am a charter member, became an important communication medium for that community.
In 2011, I procured the book Secure Coding in C and C++ by Robert Seacord and taught a class based on that text. I believe our former system administrator Jim Hranicky, now of Infosec Compliance, recommended it to me. This book discussed then-current programming practices and the various ways the security of programs could be breached and also preserved. I realized our students were writing the malware-ready code of the future, and I thought the best way to get them interested in addressing security vulnerabilities was to give them a hands-on introduction to them. In 2012 I used my sabbatical to take courses from SANS in network and web penetration testing as well as malware reverse engineering. I took the knowledge I gained and employed it directly in the security classes I began teaching and continued to teach until now.
During that sabbatical, I reached out to the UF Student Infosec Team (UF-SIT) which, at the time, was a group of about 8 to 12 people who would play CTFs (information security capture-the-flag tournaments) as the Kernel Sanders team. The members of this organization were suspicious that a faculty member had come to their meeting, but I told them if they became a student government recognized organization, not only could they get money, but I figured they would be able to get at least 100 members in the next four years. We created the organization and about two years later, at the urging of John Sawyer, they started participating in the Collegiate Cyber Defense Competition (CCDC) and later DoE CyberForce and the Collegiate Penetration Testing Competition (CPTC). The team has gone on to do well at these competitions, logging second place finishes nationally and globally in DoE CyberForce and CPTC. I am proud of these teams and everything they’ve accomplished. I know that their successes depend very little on my efforts, though, because UF has been able to recruit the best students both from Florida and across the country. The thing that has given me the most satisfaction, however, has been introducing students who, like me, had no idea information security existed when they started school but then have gone on to rewarding careers working to improve our information infrastructure in critical ways.
In addition to the work with the UF-SIT team, I led our effort to be recognized as an NSA/DHS Center of Academic Excellence in Cyber Security Research and have served as its point of contact at UF since we were granted the designation in June of 2015.
In retirement I plan to retool in Industrial Control System security so that I can learn enough to help rural utility cooperatives protect themselves against the Advanced Persistent Threat posed by nation-state attackers. This is a real and current problem, and these organizations have neither the talent nor the expertise necessary to effectively address the problem. I hope to be able to help them avoid the disastrous consequences of remaining insecure. Whether or not I succeed in this endeavor is yet to be determined, but I’ve learned that when I embark on a long-term project, good things usually happen. I hope that continues into the future.
By Joseph N. Wilson, Ph.D.
JOSEPH N. WILSON, PH.D. WITH CYBERFORCE TEAM
JOSEPH N. WILSON, PH.D.
JOSEPH N. WILSON, PH.D. AT KEY SIGNING
DEVELOPING AI TOOLS FOR GENETIC RESEARCH
University of Florida researchers are addressing a critical gap in medical genetic research — ensuring it better represents and benefits people of all backgrounds.
Their work, led by Kiley Graim, Ph.D., an assistant professor in the Department of Computer & Information Science & Engineering, focuses on improving human health by addressing "ancestral bias" in genetic data, a problem that arises when most research is based on data from a single ancestral group. This bias limits advancements in precision medicine, Graim said, and leaves large portions of the global population underserved when it comes to disease treatment and prevention.
To solve this, the team developed PhyloFrame, a machine-learning tool that uses artificial intelligence to account for ancestral diversity in genetic data. With funding support from the National Institutes of Health, the goal is to improve how diseases are predicted, diagnosed, and treated for everyone, regardless of their ancestry. A paper describing the PhyloFrame method and how it showed marked improvements in precision medicine outcomes was published in Nature Communications.
Graim, a member of the UF Health Cancer Center, said her inspiration to focus on ancestral bias in genomic data evolved from a conversation with a doctor who was frustrated by a study's limited relevance to his diverse patient population. This encounter led her to explore how AI could help bridge the gap in genetic research.
“I thought to myself, ‘I can fix that problem,’” said Graim, whose research centers around machine learning and precision medicine and who is trained in population genomics. “If our training data doesn’t match our real-world data, we have ways to deal with that using machine learning. They’re not perfect, but they can do a lot to address the issue.”
By leveraging data from the population genomics database gnomAD, PhyloFrame integrates massive databases of healthy human genomes with the smaller disease-specific datasets used to train precision medicine models. The models it creates are better equipped to handle diverse genetic backgrounds. For example, it can predict the differences between subtypes of diseases like breast cancer and suggest the best treatment for each patient, regardless of patient ancestry.
Processing such massive amounts of data is no small feat. The team uses UF’s HiPerGator, one of the most powerful supercomputers in the country, to analyze genomic information from millions of people. For each person, that means processing 3 billion base pairs of DNA.
“I didn’t think it would work as well as it did,” said Graim, noting that her doctoral student, Leslie Smith, contributed significantly to the study. “What started as a small project using a simple model to demonstrate the impact of incorporating population genomics data has evolved into securing funds to develop more sophisticated models and to refine how populations are defined.”
What sets PhyloFrame apart is its ability to ensure predictions remain accurate across populations by considering genetic differences linked to ancestry. This is crucial because most current models are built using data that does not fully represent the world’s population. Much of the existing data comes from research hospitals and patients who trust the health care system. This means populations in small towns or those who distrust medical systems are often left out, making it harder to develop treatments that work well for everyone.
She also estimated that 97% of sequenced samples come from people of European ancestry, due largely to national and state-level funding and priorities, but also to socioeconomic factors that compound at different levels: insurance affects whether people get treated, for example, which in turn affects how likely they are to be sequenced.
“Some other countries, notably China and Japan, have recently been trying to close this gap, and so there is more data from these countries than there had been previously but still nothing like the European data,” she said. “Poorer populations are generally excluded entirely.”
Thus, diversity in training data is essential, Graim said.
"We want these models to work for any patient, not just the ones in our studies," she said. “Having diverse training data makes models better for Europeans, too. Having the population genomics data helps prevent models from overfitting, which means that they'll work better for everyone, including Europeans.”
"Graim believes tools like PhyloFrame will eventually be used in the clinical setting, replacing traditional models to develop treatment plans tailored to individuals based on their genetic makeup. The team’s next steps include refining PhyloFrame and expanding its applications to more diseases.
Graim’s project received funding from the UF College of Medicine Office of Research’s AI2 Datathon grant award, which is designed to help researchers and clinicians harness AI tools to improve human health.
“My dream is to help advance precision medicine through this kind of machine learning method, so people can get diagnosed early and are treated with what works specifically for them and with the fewest side effects,” she said. “Getting the right treatment to the right person at the right time is what we’re striving for.”
By Karen Dooley
KILEY GRAIM, PH.D.
LISTEN CAREFULLY
BETTER DEEPFAKE DETECTION IS ON THE WAY
University of Florida researchers recently concluded the largest study on audio deepfakes to date, challenging 1,200 humans to identify real audio messages from digital fakes.
Humans achieved a 73% accuracy rate but were often fooled by machine-generated details, like British accents and background noise.
“We found humans weren’t perfect, but they were better than random guessing. They had some intuition behind their responses. That’s why we wanted to go into the deep dive — why are they making those decisions and what are they keying in on,” said co-lead Kevin Warren, a Ph.D. student in the Department of Computer & Information Science & Engineering.
The study analyzed how well humans classify deepfake samples, why they make their classification decisions, and how their performance compares to that of machine learning detectors, noted the authors of the UF paper, “Better Be Computer or I’m Dumb: A Large-Scale Evaluation of Humans as Audio Deepfake Detectors.”
The results ultimately could help develop more effective training and detection models to curb phone scams, misinformation and political interference.
The study’s lead investigator, UF professor and renowned deepfake expert Patrick Traynor, Ph.D., has been ear-deep in deepfake research for years, particularly as the technology grew more sophisticated and dangerous.
In January, Traynor was one of 12 experts and industry leaders invited to the White House to discuss detection tools and solutions. The meeting was called after audio messages buzzed phones in New Hampshire with a fake President Joe Biden voice discouraging voting in the primary election.
Audio deepfakes use artificial intelligence to create recordings that mimic the sound, tones and inflections of specific people. They, like video deepfakes, have become powerful tools for scammers and political disruptors.
“Audio deepfakes are a growing concern not just within the security community, but the broader world,” noted the paper, which was published earlier this year and won a Distinguished Paper award at the Association for Computing Machinery’s Conference on Computer and Communications Security.
PATRICK TRAYNOR, PH.D., AND PH.D. STUDENT KEVIN WARREN POSE WITH THEIR RESEARCH
Funded by the Office of Naval Research and the National Science Foundation, the study had participants listen to 20 samples each from three commonly used deepfake datasets. Their answers were compared to machine-learning deepfake detectors that considered the same samples.
When people misidentified audio samples as human voices, it was often because they underestimated technology’s ability to mimic details, such as accents and background noise.
“I do not believe I’ve ever heard a computer-generated voice with a proper English accent,” one participant noted.
Other red flags from participants:
“People do not say ‘on November twenty-two.’”
“The pausing was very jerky and unnatural.”
“The background noise felt like a static computer noise.”
Arguments for human samples included:
“I clearly hear laughing in the background, so that tells me this is being recorded live.”
“I could hear breathing and that made it sound human.”
“The speech is very enthusiastic; emotion is more of a human trait.”
Participants were not schooled in deepfakes before the study. They came in only with their instincts.
“The bias we found was humans, when they are uncertain, want to lean toward audio being real because that is what they are used to hearing,” Warren said. “While the machine learning models want to lean more toward deepfakes because that is what they have heard a lot. On their default settings, they’re looking at them in different ways.”
Globally, deepfake fraud increased by more than 10 times from 2022 to 2023, according to Sumsub, an identity verification service. At least 500,000 video and audio deepfakes were shared on social media in 2023, according to Deep Media, a media intelligence company whose customers include the U.S. Department of Defense.
“Ultimately, we have to ask, ‘What are we trying to get these [deepfake-detection] systems to do in order to help people?’ One of our takeaways is that we imagine some future system that is a trained human and a trained machine, but what we see now is that the features these two parties key in on are different and not necessarily complementary,” Traynor said.
Long-term success hinges on building detection models that account for the fact that human biases will slowly change.
“We’re really big on getting this out of the lab,” Traynor said. “Ideally, this would help folks in call centers, help folks when the bank calls. It will also help them when they are looking at social media and there is an audio clip of a politician.”
The name of the paper, incidentally, was pulled directly from a participant’s response when someone was sure they had found an audio deepfake: “Better Be Computer, or I am Dumb.”
“Unfortunately, they were wrong,” Traynor said, laughing. “It was a human being.”
By Dave Schlenker
REVOLUTIONIZING AMR RESEARCH
Antimicrobial resistance (AMR) threatens the role of antibiotics in modern medicine, yet we don’t fully understand its intricacies. There is an urgent need for research to advance our understanding of AMR, and the National Institutes of Health has awarded a grant to a team of scientists at the University of Florida and the University of Minnesota to help this cause.
Christina Boucher, Ph.D., an associate professor, will be collaborating with Noelle Noyes, DVM, Ph.D., an associate professor in the University of Minnesota Department of Veterinary Population Medicine, on the $3.7 million grant to conduct groundbreaking research on AMR. Their primary objective is to develop innovative techniques for enhancing the detection of AMR genes within bacterial populations, which is crucial for addressing the challenges these genes pose in treating infections effectively.
Traditionally, research in this field has been divided between clinical applications targeting individual pathogens and ecological studies focusing on the broader mechanisms of AMR evolution. However, Boucher and Noyes’ research combines advancements in microbiology and bioinformatics to bridge the gap in conventional AMR research. By leveraging their combined expertise, they aim to enhance the relevance and accessibility of metagenomic data, benefiting both ecological research and practical clinical applications.
“This innovative approach seeks to revolutionize the identification and interpretation of AMR indicators in minimal DNA samples, a task that current technologies struggle with,” Boucher said. “Unlike the conventional method of piecing together short DNA fragments, which can lead to errors, this technique reads the DNA in its entirety from the original sample, eliminating the need for patching together fragmented sequences.”
This advancement significantly boosts accuracy, especially in detecting rare resistance genes that are challenging to identify using traditional approaches. Through their collaborative efforts, Boucher and Noyes strive to empower healthcare providers to make better-informed decisions on antibiotic use and improve medical response to AMR.
“Our research has the potential to change how we detect and understand antimicrobial resistance, by providing a vital tool in monitoring resistant bacteria's spread and hopefully prolong the effectiveness of antibiotics for treating infections,” Boucher said.
By Drew Brown
CHRISTINA BOUCHER, PH.D.
PROTECTING PATIENTS, ADVANCING RESEARCH
UF researchers are working to anonymize video data in autism spectrum disorder studies.
Researchers at the University of Florida are tackling a significant challenge in the diagnosis and treatment of autism spectrum disorder (ASD) in children: clinician availability, which limits access to diagnosis and care. Advances in computational methods and crowdsourcing hold promise for expanding the availability of treatment, but to achieve this, researchers need access to video data of children with ASD, and when working with video data, patient privacy becomes a key concern.
Ensuring the confidentiality of patient data is crucial when sharing video recordings that contain sensitive information, such as the patient’s face and voice. The standard approach to anonymization involves removing or modifying personal identifiers, which is easily achieved with text data. However, with video data, the challenge is more complex. Anonymizing facial and vocal identifiers without altering the critical details needed to understand ASD-specific behaviors has traditionally been an issue.
To overcome this challenge, Eakta Jain, Ph.D., principal investigator and an associate professor in the UF Department of Computer & Information Science & Engineering, and Kevin Butler, Ph.D., co-investigator and a professor in the same department, have received a $3.7 million grant from the National Institutes of Health to develop innovative solutions. Jain plans to use the latest AI-based models to modify the faces and voices of patients while retaining the essential information needed to quantify ASD-associated behaviors. This approach aims to create a privatized data set of video recordings that balances patient confidentiality with researcher access.
According to Jain, existing algorithms for anonymizing audio and video data, such as blur filters, are not effective for retaining gaze and facial expression, which are critical details when clinicians screen for autism.
“Though there is a large body of work in anonymization in both audio and video processing, it is yet unknown how well existing algorithms obfuscate identity while retaining autism-specific atypicality,” Jain explained.
With a privatized data set, Jain hopes to facilitate future research and provide a valuable resource for expanding access to data resources for clinician education and computational research. This could ultimately lead to improved diagnosis and care for children with ASD, making treatment more accessible and effective.
By Drew Brown
EAKTA JAIN, PH.D.
EAVESDROPPING: A NEW SECURITY THREAT
The ghostly woman’s voice pipes through the speakers, covered in radio static but her message intact from beyond — “The birch canoe slid on the smooth planks.”
A secret message from the other side? A spectral insight?
No, something much spookier: Voice recordings captured, secretly, from the radio frequencies emitted by ubiquitous, cheap microphones in laptops and smart speakers. These unintentional signals pass, ghost-like, through walls, only to be captured by simple radio components and translated back to static-filled — but easily intelligible — speech.
For the first time, researchers at the University of Florida and the University of Electro-Communications in Japan have revealed a security and privacy risk inherent in the design of these microphones, which emit radio signals as a kind of interference when processing audio data.
The attack could open people up to industrial espionage or even government spying, all without any tampering with their devices. But the security researchers have also identified multiple ways to address the design flaw and shared their work with manufacturers for potential fixes going forward.
“With an FM radio receiver and a copper antenna, you can eavesdrop on these microphones. That’s how easy this can be,” said Sara Rampazzi, Ph.D., a professor of computer and information science and engineering at UF and coauthor of the new study. “It costs maybe a hundred dollars, or even less.”
They used standardized recordings of random sentences to test the attack, giving the eerie impression of a ghostly woman talking about canoes or imploring you to “Glue the sheet to the dark blue background.” Each nonsense sentence was instantly recognizable despite, in some cases, passing through concrete walls 10 inches thick.
The vulnerability is based on the design of digital MEMS microphones, which are widespread in devices like laptops and smart speakers. When processing audio data, they release weak radio signals that contain information about everything the microphone is picking up. Like other radio signals, these transmissions can pass through walls to be captured by simple antennas.
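For technically inclined readers, the sketch below shows the textbook operation an ordinary FM receiver performs in hardware: recovering audio from a frequency-modulated signal by tracking its phase from sample to sample. It is a generic illustration only, not the researchers' tooling; the file name, data format and decimation factor are placeholder assumptions.

```python
# Generic FM demodulation of complex (I/Q) radio samples. Illustrative only;
# the capture file and decimation factor below are placeholders.
import numpy as np

def fm_demodulate(iq, decimate=10):
    # Instantaneous frequency is the phase change between consecutive samples.
    phase_delta = np.angle(iq[1:] * np.conj(iq[:-1]))
    audio = phase_delta[::decimate]                  # crude sample-rate reduction
    return audio / (np.max(np.abs(audio)) + 1e-9)    # normalize for playback

# Hypothetical usage with a software-defined-radio recording:
# iq = np.fromfile("capture.iq", dtype=np.complex64)
# audio = fm_demodulate(iq)
```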
Even when someone is not intentionally using their microphone, it can be picking up and transmitting these signals. Common browser apps like Spotify, YouTube, Amazon Music and Google Drive enable the microphone sufficiently to leak out radio signals of anything said in the room.
The researchers tested a range of laptops, the Google Home smart speaker, and headsets used for video conferencing. Eavesdropping worked best on the laptops, in part because their microphones were attached to long wires that served as antennas amplifying the signal.
Rampazzi’s lab also used machine learning-driven programs from companies like OpenAI and Microsoft to clean up the noisy radio signals and transcribe them to text, which demonstrated how easy it would be to then search eavesdropped conversations for keywords.
However, a series of fairly simple changes could greatly decrease the effectiveness of the attack. Changing where microphones are placed in laptops could avoid long cables, which can amplify the radio leakage. Slight tweaks to the standard audio processing protocols would reduce the intelligibility of the signals.
The researchers have shared these ideas with laptop and smart speaker manufacturers, but it’s not clear if the companies will make the upgrades in future devices.
By Eric Hamilton
INNOVATIONS IN AI JOURNALISM
Alexandre Gomes de Siqueira, Ph.D., an assistant professor, is co-supervising new interdisciplinary research exploring the application of generative artificial intelligence in journalism.
The collaborative efforts between UF and Brazil’s Federal University of Uberlândia (UFU) highlight how global partnerships can drive innovation in emerging fields.
Conducted under guidance from UFU professors Pedro Franklin and Marcelo Marques, the research investigates Prompt Journalism — an experimental approach that tasks journalists with using AI as a collaborator to write prompts, analyze responses and refine content in real time. The framework fosters a dialogue with AI that is conversational, interactive and technologically supported, ultimately improving the quality of news production and information consumption.
In May, the collaboration earned top recognition at INTERCOM, the largest communication congress in Latin America, hosted by the Brazilian Society of Interdisciplinary Studies in Communication. Gustavo Henrique de Souza Medrado, a master’s degree student at UFU co-advised by Siqueira, won first place in the INTERCOM Communication Award for Social Transformation for his work in Campinas, São Paulo, Brazil.
This recognition underscores the potential of interdisciplinary and international partnerships and highlights the evolving role of AI in shaping the future of journalism and the dissemination of information worldwide.
By Paris Carter
INVENTION OF THE YEAR
Prabhat Mishra, Ph.D., a distinguished professor, has developed technology with the potential to revolutionize quantum computing and cybersecurity. His current research emphasizes the development of quantum error correction codes and quantum circuit modeling, both critical for protecting quantum information from errors caused by noise and decoherence.
Mishra’s invention involves understanding system weaknesses and designing defenses that effectively mitigate the associated risks. As cybersecurity threats become increasingly sophisticated, this work is crucial and underscores the growing need for protective measures.
By Drew Brown
ALEXANDRE GOMES DE SIQUEIRA, PH.D. AT INTERCOM
PRABHAT MISHRA, PH.D.
NATIONAL CYBERSECURITY COMPETITION
The University of Florida Student Information Security Team (UFSIT) placed second in the Global Collegiate Penetration Testing Competition (CPTC) in early January, after winning the Regional Collegiate Penetration Testing Competition in November. In addition, the team won the “Cloud 9” and “Best Slide” category awards for their explanation of how an attacker could use information extracted from cloud resources to exploit corporate email and extract sensitive corporate and customer information.
The Global CPTC organizers led a team of 70 computing professionals who volunteered to develop the infrastructure for a social media company and to staff its personnel and security operations center. The teams identified vulnerabilities in the company’s software, systems, and network communications, as well as the resistance of its personnel to social engineering attacks such as phishing and vishing. The students then spent two days working on the technical test, employing offensive security tools to identify and exploit vulnerabilities and determine what security breaches could be caused by opportunistic attackers.
The teams were then given six hours to complete a comprehensive technical report detailing the company’s strengths, weaknesses, and recommendations for remediating the security vulnerabilities.
Dakota State University won first place in the competition, held at the Global Cybersecurity Institute in Rochester, New York, with Penn State placing third.
This year’s team was captained by Ayden Colby and included Adam Hassan, Yuliang Huang, Daniel Aguirre, Avigail Laing and Tristan Ratchev. Ben Ruddy and Naresh Panchal served as alternates. The team was coached by Joseph N. Wilson, Ph.D., a professor.
“This team did an amazing job of getting ready for the competition and their performance reflects that. They’ve shown all the members of the UFSIT organization how dedication and preparation yield tangible rewards,” Wilson said.
If you would like to learn more about UFSIT, you can visit their website at https://ufsit.club.
By Drew Brown
DEPARTMENT OF NAVY CHALLENGE
A team of computer science students participated in the Cyber Resiliency and Measurement Challenge (CRAM), hosted by the Naval Surface Warfare Center Dahlgren Division (NSWCDD). The goal of the challenge was to develop algorithms that measure a system’s cyber defensive capabilities against a full spectrum of threats and calculate the probability of failure against various threats.
Participants were instructed to use artificial intelligence and machine learning to design and train algorithms that automate the cyber resiliency assessment. The team, among 14 from various universities selected to compete, consisted of Ozlem Polat, Samson Carter, Andrew Ballard and Shayan Akhoondan, all undergraduate students in the Department of Computer & Information Science & Engineering.
CRAM consists of three phases culminating in an in-person demonstration judged by a panel of representatives from NSWCDD.
The students made it to phase two, where they were invited to begin developing the proposed tool. They had five weeks to develop the models and algorithms for judging in the third phase of the competition, held at the University of Mary Washington Dahlgren Campus. While the team did not win the challenge, they were thrilled by the opportunity to apply their knowledge in real-world situations.
JOSEPH N. WILSON, PH.D. AND THE UFSIT TEAM
THE FUTURE OF STEGANOGRAPHY
Steganography, the art of concealing secret messages within other pieces of text or media, has been practiced for centuries. Its history dates to Ancient Greece, but the first recorded use of the term was in 1499, in Steganographia, a treatise on the subject disguised as a book about magic. Even today, the method remains relevant: people hide messages in images by subtly changing pixel values to encode the information, and a computer can then compare the altered image with the original and decipher the hidden message for the recipient.
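To make the traditional approach concrete, here is a minimal Python sketch of least-significant-bit (LSB) embedding, one common way of changing pixels to carry a message. It illustrates the classic technique described above, not Bauer's AI-based method; the pixel list and helper functions are invented for the example.

```python
# Minimal illustration of least-significant-bit (LSB) image steganography.
# Pixels are modeled as a flat list of 0-255 integers; a real implementation
# would read them from an image file (e.g., with Pillow).

def embed(pixels, message):
    """Hide message bytes in the lowest bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite the least-significant bit
    return stego

def extract(stego, num_chars):
    """Recover num_chars bytes by reading the lowest bit of each pixel."""
    out = bytearray()
    for c in range(num_chars):
        byte = 0
        for i in range(8):
            byte |= (stego[c * 8 + i] & 1) << i
        out.append(byte)
    return out.decode()

cover = [123, 200, 45, 67, 89, 210, 33, 148] * 10   # stand-in for pixel data
stego = embed(cover, "hi")
print(extract(stego, 2))   # -> "hi"
```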
However, traditional methods of using steganographic images have a significant flaw: an adversary can find the original image, making it possible to compare it with the image containing the hidden message. This comparison allows them to decode the message and potentially compromise the sender’s security. Luke Bauer, a Ph.D. student at the University of Florida conducting research at the Florida Institute of Cybersecurity Research (FICS), is working to eliminate this vulnerability by developing an innovative approach that leverages artificial intelligence (AI) image generation.
Bauer’s research focuses on using AI to create images that already contain hidden messages, eliminating the need for a comparison between the original and encoded images. This means a message sender can post the encoded image to public platforms without arousing suspicion, making it difficult for an adversary to identify the post containing the hidden message. Meanwhile, the message receiver can decipher the hidden message using the same AI model, without requiring specialized software.
Bauer is driven by the real-world applications of his research, particularly in situations where people face danger or live under oppressive regimes with strict censorship. He hopes that his work can provide a valuable tool for those seeking to communicate secretly. Bauer’s initial research, which focused on text steganography using large language models, was conducted as part of the DARPA Resilient Anonymous Communication for Everyone (RACE) project, in partnership with Galois, a company that leverages research to deliver solutions and tools that increase security, reliability and operational efficiency.
Galois has continued to develop the project and plans to release an app, making it accessible to a broader audience. Applying the knowledge gained from this research to images was a natural next step in the process of furthering steganographic techniques through the use of AI.
“Generative models were just beginning to take off when I began my Ph.D. Although there were obvious uses for them as personal or commercial tools, I wished to examine how they could be used to help people,” Bauer said. “Through my research here at FICS, I was able to discover and improve ways that these models could be used to protect people’s privacy, freedom of expression and safety.”
Bauer’s fascination with cybersecurity began during his undergraduate studies at Duke University, where he took a class on hardening systems against attacks. This interest led him to work on steganography with his advisor, Vincent Bindschaedler, Ph.D., an assistant professor in the UF Department of Computer & Information Science & Engineering.
“Despite numerous challenges in his Ph.D. journey, Luke has demonstrated his unwavering commitment to research. In fact, his unending efforts and tenacity were instrumental for the project’s success,” Bindschaedler said.
Bauer’s previous research explored another form of steganography by hiding messages in text through large language models. He plans to continue this line of research by incorporating steganographic audio and video AI generation, further expanding the possibilities of secure communication.
“In my research, I have strived for usability and theory that holds up in real-world use cases,” Bauer said. “I hope that one day my research will be used by people around the world.”
By Drew Brown
EDMEDIA 2025 OUTSTANDING PAPER
A team from the Department of Computer & Information Science & Engineering (CISE) recently received an outstanding paper award at the EdMedia conference for research that personalizes math word problems to bridge cultural gaps in education.
Called “InterestMe Math: A Math Word Problem Rewrite System Integrating Career Interests to Enhance Learning Outcomes,” the University of Florida engineering research project incorporates students’ personal career aspirations into math word problems to help them navigate the problems and enhance classroom engagement.
Co-authored by a CISE team that includes Distinguished Professor and former department chair Juan E. Gilbert, Ph.D., the paper is the foundation for DeKita Rembert’s doctoral dissertation and reflects a deep commitment to helping students feel connected.
Grounded in the expectancy-value theory framework, the research aims to enhance math comprehension and learning outcomes. Researchers conducted a study with 29 fifth- and sixth-grade students, comparing traditional math problems to rewritten ones integrating themes based on student career aspirations. The model was designed with input from students and teachers using culturally relevant content and drawing on more than 600 career paths.
By Paris Carter
AUDIO DEEPFAKES & THE HEARING IMPAIRED
When University of Florida doctoral student Magdalena Pasternak realized the voice urging her to take a phone survey was not human, she not only hung up but was left with nagging questions, leading her to launch a research project.
“The caller requested a moment of my time but then abruptly decided to send me an email,” she recalled. “It was at that moment I realized I was not speaking to a person, but an artificial voice intentionally designed to obscure its synthetic nature.”
So inspired, Pasternak, a doctoral student and researcher in the Florida Institute for Cybersecurity Research (FICS), led a study with other UF researchers examining how audio deepfakes threaten hearing-impaired people who rely on cochlear implants (CIs).
Ultimately, the study indicates the need for enhanced deepfake detection systems in implants for the hearing impaired.
“Deaf and hard-of-hearing populations, especially cochlear implant users, perceive audio in very different ways from hearing persons,” said CISE Professor Kevin Butler, Ph.D., FICS director and lead faculty investigator on the study.
“Conventional wisdom,” he added, “would indicate that CI users in particular would find it challenging to impossible to differentiate natural from synthetic speech. Surprisingly, we found that not all deepfakes are the same. Certain types of deepfake attacks are particularly problematic. This is where our focus on detection, particularly for this community, should be placed as we develop and implement solutions. Some of these alert mechanisms could potentially be built into assistive devices in the future.”
Butler contends this study is the first to examine how CI users perceive audio deepfakes and is one of the largest academic studies examining their hearing perceptions based on the number of study participants.
It focused on how CI users respond to audio deepfakes, or artificial audio generated by artificial intelligence, as well as how computer-based audio deepfake detectors performed against audio as perceived by CI users.
The results: Study participants without CIs were able to identify deepfakes with 78% accuracy, while CI users achieved only 67% accuracy. CI users also were twice as likely to misclassify deepfakes as real speech.
Advancements in hearing technology, such as cochlear implants, have significantly improved accessibility for individuals with hearing loss, enabling them to engage more effectively with audio-based interfaces like smartphones and voice assistants. Unfortunately, new technology brings new risks to vulnerable members of the population.
The study was an interdisciplinary collaboration with Professor Susan Nittrouer, Ph.D., who leads the Speech Development Laboratory in UF’s College of Public Health and Health Professions (PHHP) and is an internationally renowned expert in perceptual processing of speech.
The work was recently published in the paper “Characterizing the Impact of Audio Deepfakes in the Presence of Cochlear Implant Simulated Audio,” which was presented at the Network and Distributed System Security Symposium earlier this year in San Diego.
Pasternak’s research focuses on security in large language models, deepfake detection and machine learning. Her work is supported by the Center for Privacy and Security of Marginalized and Vulnerable Populations (PRISM), a National Science Foundation project directed by Butler that aims to transform how the security community addresses the specific cybersecurity needs of vulnerable populations by developing tools and methods at the core of cybersecurity research and technology design.
Generative audio technologies now can create persuasive, human-sounding audio. While this technology is commonly used with smart-device voice assistants Alexa, Cortana and Siri, it is considered a deepfake when used for malicious purposes.
Deepfakes have been used to breach confidentiality, extort money and spread misinformation. During the 2024 New Hampshire Democratic primary, over 20,000 voters received robocalls impersonating President Joe Biden, telling them not to vote.
CIs restore hearing by converting sound into electrical signals stimulating the auditory nerve. CIs use a limited number of electrode channels, prioritizing speech-relevant frequencies and compressing sound, which reduces nuances in speech aspects like pitch. Therefore, CI users may need to depend on alternative auditory cues.
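Research in this area often relies on “CI-simulated audio,” which approximates what an implant user hears. Below is a hedged Python sketch of a simple noise vocoder, a standard way to produce such simulations; the channel count, filter choices and function names are illustrative assumptions, not the exact pipeline used in the UF study.

```python
# Illustrative noise-vocoder approximation of cochlear-implant hearing: split
# speech into a few frequency bands, keep only each band's amplitude envelope,
# and use it to modulate band-limited noise. Band count and cutoffs are
# arbitrary example choices.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def ci_simulate(audio, sample_rate, n_channels=8, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)      # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(audio))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="band", fs=sample_rate, output="sos")
        band = sosfilt(sos, audio)
        envelope = np.abs(hilbert(band))               # slow amplitude contour
        carrier = sosfilt(sos, rng.standard_normal(len(audio)))
        out += envelope * carrier                      # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-9)          # normalize to [-1, 1]

# Usage: load a mono waveform (e.g., with the soundfile package) and listen:
# sim = ci_simulate(waveform, 16000)
```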
Pasternak and her research team recruited 87 people with good hearing and 35 CI users from the United Kingdom and the United States to evaluate and compare their deepfake-detection accuracy by answering the following questions: 1) How susceptible are CI users to audio deepfakes? 2) How effective are automated models on CI-simulated audio? 3) Can these models be used as substitutes for CI users?
Researchers used the Automatic Speaker Verification spoofing (ASVspoof) detection database, which incorporates two prevalent deepfake-generation techniques: speech synthesis and voice conversion. Speech synthesis (text-to-speech, or TTS) generates human-like speech from text, though it may lack natural human speech characteristics. Voice conversion (VC) modifies existing human speech samples to resemble a target voice while preserving natural speech characteristics.
While CI users were able to detect TTS deepfakes, VC deepfakes were a greater challenge. CI users were far more likely to consider them authentic rather than artificial.
Though current deepfake models do not completely capture the specific challenges faced by CI users, improving proxy models could enhance deepfake detection systems.
“Deepfake technology has advanced so rapidly in such a short span of time that many people are simply unaware of how sophisticated modern deepfakes have become,” Pasternak said. “While education and awareness programs can play a huge role in protecting users, we must also develop technological solutions that accommodate diverse auditory processing abilities. By doing so, we can ensure that all users, especially those with auditory implants, have the necessary defenses against ever-evolving threats.”
By Timothy Brown
MAGDALENA PASTERNAK SPEAKING AT NDSS SYMPOSIUM
2025-2026 CISE Scholarship & Award Recipients
The department congratulates the following award and scholarship winners. These students were selected by the awards committee based on their outstanding academic performance and significant contributions to society.
UNDERGRADUATE
LAC Scholarship
Roberto Carrero Salazar • Laurence Georges • Daniel Gomez • Patrick Leimer • Julio Leonardi • Daniel Monzon • Alexis Morales Amaro • Jorge Ramirez • Vivian Rincon
CISE Scholarship
Jack Gordon • Kevin Jin • Camila Menendez • Rohan Pherwani • Nicolas Slenko • Yuyang Sun
Nieten Award for Undergraduate Students
Sara Lin
Gartner Group Scholarship
Saeed Ansari • Luke Barcenas • Jason Chen
Mary and Heather Abbot Scholarship
Samantha Bennett • Ananya Sista
Matthew Martin Memelo Memorial Scholarship
Kenneth Chew
GRADUATE
L3Harris Corporation Graduate Fellowship
Mollie Brewer • Sarah Brown • Jennifer Sheldon • Patriel Stapleton
Gartner Group Information Technology Scholarship
Jayetri Bardhan • Sri Hrushikesh Varma Bhupathiraju • Xuan Nhat Hoang
Sherin Thomas (MS ’12) is a software engineering leader, artist, and climate change advocate with an impressive 15-year career at prestigious companies like Slack, Google, Twitter, Netflix and Lyft. A technical expert in low-latency, high-throughput data processing with several patents, she is a sought-after speaker at international conferences, including a keynote address, and serves on their program committees. Thomas also contributes to UF’s computer science advisory board and has been a featured author in notable publications, such as the InfoQ AI/ML and Data Trends report, which influences the technical strategies of major software companies.
Thomas is deeply involved in social impact causes, particularly climate change and promoting diversity in STEM. She collaborated with NASA and led the development of an AI-powered tool to automate the detection of weather patterns, saving scientists thousands of hours of data preparation. This project, which involved students from UF’s Women in Computer Science, led to open-source contributions, publications and career opportunities, including for a mentee who joined Blue Origin.
Currently, Thomas leads the Aicacia lab at Collaborative Earth, focusing on scaling ecological restoration using AI. She collaborates with ETH Zurich to develop specialized language models for restoration methodologies, further advancing her commitment to mitigating climate change.
Dan Rua (BS ‘91)
Rua leads Admiral, the Visitor Relationship Management (VRM) company, which helps thousands of digital publishers worldwide grow visitor relationships and revenue. Admiral’s AI-powered SaaS platform solves several monetization challenges digital publishers face, with visitor journeys that include registration walls, paywalls and paid subscriptions, donation management, advanced adblock analytics and revenue recovery, GDPR/GPP privacy consent, email acquisition, first-party (1P) data collection, social growth, and more. Websites growing stronger visitor relationships with Admiral’s platform include CBSSports.com, CNBC.com, NYPost.com, Weather.com, and more.
Awards
This spring, the Department of Computer & Information Science & Engineering held its inaugural Alumni Awards ceremony, which featured four awards focused on honoring the extraordinary achievements of CISE alumni. These awards celebrated individuals who have made significant contributions to their field of expertise, demonstrated outstanding personal accomplishments, or shown exceptional service to our department and the college.
Pedro Guillermo Feijóo-García, Ph.D.
DISTINGUISHED ALUMNI AWARD FOR ACADEMIC EXCELLENCE
Feijóo-García was recognized for his outstanding contributions to education, research, and human-centered computing. He is a lecturer at Georgia Tech’s School of Computing Instruction. A Colombian scholar and Fulbright alumnus, he teaches software design, engineering, and human-computer interaction, and has taught numerous courses in human-centered computing and programming in both the U.S. and Colombia. Previously, he served as a faculty member at the University of Florida and Universidad El Bosque, where he received multiple teaching awards.
At Georgia Tech, he was named to the 2024 Course Instructor Opinion Survey Honor Roll for teaching excellence. He leads the PARCE Lab, researching computer science education and human-AI interaction. In 2024, he secured a $2 million National Science Foundation grant as principal investigator for a project on hidden curricula in computing education. Feijóo-García holds degrees in systems and computing engineering and mechanical engineering, as well as a Ph.D. in human-centered computing from the University of Florida.
Amit Dhurandhar, Ph.D.
DISTINGUISHED ALUMNI AWARD FOR CAREER ACHIEVEMENT
Dhurandhar was recognized for his exceptional contributions to the field of artificial intelligence. His research focuses on understanding AI through its statistical and output behaviors, with applications across diverse industries such as healthcare, retail, and semiconductor manufacturing. His current work emphasizes enhancing trust in AI systems, gaining recognition in top venues like NeurIPS and media outlets such as Forbes and PC Magazine. He has contributed to groundbreaking research in olfaction, with publications in Science and Nature Communications, drawing extensive media attention.
Dhurandhar’s achievements include multiple awards, such as the Association for the Advancement of Artificial Intelligence Deployed Application Award and Best of ICDM. He co-led the development of IBM’s AI Explainability 360 open-source toolkit and has been an invited speaker at notable events like ACM CODS-COMAD 2021. His contributions have also influenced IBM products, earning him prestigious internal awards. He actively contributes to the AI community through roles on top conference committees, National Science Foundation panels, and IBM’s invention disclosure team, and is listed in Marquis Who’s Who 2024.
Chris S. Crawford, Ph.D.
DISTINGUISHED YOUNG ALUMNI AWARD
Crawford was recognized for his outstanding contributions to the field of Computer Science and his pioneering research in human-robot interaction and Brain-Computer Interfaces. He is an associate professor in the University of Alabama’s Department of Computer Science, where he directs the Human-Technology Interaction Lab. His research investigates systems that provide computer applications and robots with information about a user’s cognitive state. He previously developed a brain-drone racing system that was featured by more than 800 news outlets, including Discovery, USA Today, The New York Times, and Forbes. Along with investigating brain-robot interaction applications, Crawford developed Neuroblock, a tool designed to engage K-12 students in neurofeedback application development. He has received multiple awards for his research, including the National Science Foundation CAREER Award.
Dustin Karp
DISTINGUISHED ALUMNI AWARD FOR ENTREPRENEURSHIP
Karp’s vision and entrepreneurial spirit have made a significant impact in both the business and education sectors. While studying at the University of Florida, Karp founded Perch, a marketplace connecting Gator fans with local driveways and businesses for game day parking. What began with flyers and a street sign has scaled to thousands of reservations across 10 college campuses, becoming a core part of the game day experience for fans across the country.
Also while at UF, Karp co-founded Edugator, a platform reinventing how computer science is taught. Built with Amanpreet Kapoor, an assistant instructional professor in the UF Department of Computer & Information Science & Engineering, along with Marc Diaz and Prayuj Tuli (both students at the time), Edugator enables instructors to create hands-on, interactive courses. It is powered by AI tools for grading, personalized tutoring, and automatic problem generation. To date, it has helped thousands of learners and graded more than 100,000 submissions. Karp has spent time in Gainesville, Boston, San Francisco, and New York, and he currently builds product at Polymarket in New York City.
P.O. BOX 116120
GAINESVILLE, FL 32611
WWW.CISE.UFL.EDU / @UFCISE @UF_CISE
UF ENGINEERING GRADUATES SMILE FOR THE CAMERA DURING SPRING COMMENCEMENT