NCURA MAGAZINE








Navigating the Grant Submission Process in the Time of Artificial Intelligence By Brandi Stephens, Vinita Bharat, and Liz Seckel 4
Your Next Research Assistant Might Be a Chatbot By Robert Pilgrim and Dustin Schwartz 8




Embracing the Crossroads: How New Research Administrators Can Lead the Way in AI Adoption
By Kathleen Halley-Octa and Kari Woodrum 14
Transforming Research Administration with Artificial Intelligence: Creating New Efficiencies and Supporting Data-Driven Decision Making
By David Richardson and David Smelser 17
AI: Research Administrators Meeting the Moment By Lori Ann Schultz 21
Have You Forgotten How to Fly? By Lamar Oglesby 22
How USC Harnessed AI to Transform Sponsored Contracts Data into Strategic Insight By Lumi Bakos, Emily Devereux, and Alex Teghipco 25
Navigating a Shifting Landscape: Institutional Strategies in Response to Federal Research Policy Changes
By Michelle D. Christy 33
Working at a Predominantly Undergraduate Institution (PUI) or Other Small Research Administration (RA) Office? Say Hello to Your New AI Coworker By Claudia Scholz 36
Beyond Personal Productivity: Scaling Artificial Intelligence for Institutional Research Administration Workflows
Barrie D. Robison, Nathan Layman, Jason Cahoon, Luke Sheneman, Sylvia Bradshaw, Katie Gomez, Nathan
Research Administration in the Middle East and North Africa: Artificial Intelligence and the Future of Research Administration:

IN THIS ISSUE: Artificial Intelligence is no longer theoretical—it’s here, and it’s transforming research administration. This issue of NCURA Magazine explores how AI tools are reshaping our work across multiple dimensions, from grant development to compliance oversight, and strategic planning.
To begin, Brandi Stephens, Vinita Bharat, and Liz Seckel, in “Navigating the Grant Submission Process in the Time of Artificial Intelligence,” examine how AI can enhance the grant writing process. They highlight ways AI can support editing, simulate proposal reviews, and expand access for applicants with limited resources—while also cautioning against overreliance, especially when it comes to interpreting eligibility or compliance criteria.
Building on this foundation, Robert Pilgram and Dustin Schwartz explore the role of conversational AI in “Your Next Research Assistant Might Be a Chatbot.” Their article describes how chatbots trained on institutional policies can efficiently answer repetitive questions, streamline proposal support, and empower junior faculty with timely guidance— offering a glimpse into the future of scalable, user-centered administrative assistance.
Lori Ann Schultz’s “AI: Research Administrators Meeting the Moment” adds a call to action. She underscores how research administrators can harness generative AI to automate transactional tasks, freeing up time for more strategic support of researchers. Schultz emphasizes that the adoption of AI doesn’t diminish our role—it expands it, allowing professionals to lead with insight, creativity, and purpose.
Finally, David Richardson and David Smelser’s “Transforming Research Administration with Artificial Intelligence” provides a strategic lens. They explore how predictive analytics, automated compliance monitoring, and AI-powered dashboards can help institutions make smarter decisions, reduce risk, and better allocate resources.
Together, these articles make a compelling case: AI isn’t replacing research administrators—it’s empowering them. By offloading routine tasks and uncovering deeper insights, AI allows professionals to focus on high-value, strategic contributions. AI: The Next Frontier offers a roadmap for navigating this pivotal transformation. N
Author’s Note: In the spirit of embracing AI, the introduction to this issue was written with the assistance of ChatGPT.

Kathleen Halley-Octa, MA, CRA, is a Co-Editor for NCURA Magazine and serves as a Manager with Attain Partners. She has worked in research administration for more than a decade and has experience at both the central and departmental level. She can be reached at kmhalleyocta@attainpartners.com.
SENIOR EDITOR
Tanta Myles
Georgia Tech
CO-EDITORS
Tolise Dailey
Georgetown University School of Medicine
Kathleen Halley-Octa
Attain Partners
Martin Williams
Vaughn College of Aeronautics and Technology
CONTRIBUTING EDITORS
Career Development
Lamar Oglesby
Rutgers University
Robyn Remotigue
University of North Texas HSC at Fort Worth
Clinical/Medical
Christina Stanger
MedStar Health Research Institute
Collaborators
Anthony Beckman
University of Rochester
Lisa Mosley
Yale University
Compliance
Jeff Seo
Northeastern University
Stacy Pritt
Texas A&M University System
Contracting
Beth Kingsley
Yale University
Laura Letbetter
Georgia Tech
Departmental Research Administration
Kelly Andringa
University of Iowa College of Medicine
Jennifer Cory
Stanford University
Diversity, Equity, & Inclusion
Sheleza Mohamed
American Heart Association
Laneika Musalini
Metropolitan State University of Denver
Financial Research Administration
Erin Bailey
University at Buffalo Clinical and Translational Science Institute
Brian Miller
Emory University
Global - Africa
Josephine Amuron
African Center for Global Health and Social Transformation
Global - Asia Pacific
Lisa Kennedy
University of Queensland
Global - Europe
Joey Gaynor
Trinity College Dublin
Kirsi Reyes-Anastacio
University of Helsinki
EXECUTIVE EDITOR
Marc Schiffman
NCURA
COPY EDITORS
Beth Jager
Claremont McKenna College
Jeanne Kisacky
Cornell University
Paulo Loonin
Duke University School of Medicine
Robin Ruetenik
University of Iowa
Global - Middle East
Reem Younis
United Arab Emirates Ministry of Education
Global - U.S.
MC Gaisbauer
University of California-San Francisco
Christopher Medalis
School for International Training
Pre-Award
Wendy Powers
University of Maine
Trisha Southergill
Colorado State University
Predominantly Undergraduate Institutions
Magui Cardona
University of Baltimore
Michelle Gooding
Frederick Community College
Research Development
Camille Coley
University of San Francisco
Self-Care
Rashonda Harris
Johns Hopkins University
Kim Moreland
University of Wisconsin - Madison
Senior Administrator
Lisa Nichols
University of Notre Dame
Lindsey Spangler
Duke University School of Medicine
Spotlight on Research
Derek Brown
Stanford University
Systems/Data/Intelligence
Thomas Spencer
University of Texas Rio Grande Valley
Dan Harmon
University of Illinois Urbana-Champaign
Training Tips
Helene Brazier-Mitouart
Weill Cornell Medicine
Work Smart
Hagan Walker
Prisma Health
Claire Stam
Prisma Health
Young Professionals
Carol Bitzinger
Ohio State University
Katie Gomez Freeman
Southern Utah University


By Denise Moody, NCURA President
Welcome to our August issue recognizing the future of AI: The Next Frontier. At the time of this issue, NCURA will be hosting its Third Annual AI Symposium in Washington, DC on August 9th. Program Chairs Nancy Lewis, Lori Ann Schultz, and Thomas Spencer have put together an extremely robust program that highlights the impact of AI on our research administration profession, research community, and sponsors. This symposium precedes NCURA’s 67th Annual Meeting of the membership with the theme Forward Focused: Priming for Change, further demonstrating NCURA’s commitment to being strategic and forward-looking.
In keeping with the Magazine theme, I asked ChatGPT what it means to consider AI: The Next Frontier in relation to our profession. The response emphasized “recognizing the transformative potential that AI has to improve various aspects of research management and administration,” with key areas including “enhanced efficiency, data analysis, grant management, predictive analytics, improved communication, and ethical considerations and compliance.” Throughout many of this year’s NCURA webinars on the Changing Federal Landscape, it has been said that, with continuously increasing regulatory changes and compliance requirements paired with institutional budget constraints in 2025, NCURA’s members are often faced with doing more work with fewer resources. Utilizing AI to reduce our own administrative burden while enhancing our ability to comply is needed now more than ever.
During this challenging year, NCURA continues to focus on our mission to support and advance the research administration profession. In addition to our publicly available offerings (webinars, Changing Federal Landscape Collaborate Community, and monthly Career Center), NCURA’s Bridge Fund was launched (www.ncura.edu/MembershipVolunteering/Volunteering/NCURABridgeFund.aspx). This Fund was initiated by our 2025 Pre-Award Research Administration Conference keynote speaker, Dr. Andrea Hollingsworth, who donated her speaker fees to assist research administrators whose jobs were impacted due to funding changes. The Fund offers training, coaching, and a free year of NCURA membership to help individuals transition into new roles and provide them with the skills and resources needed for a successful job search. It is offered to research administrators or federal government employees whose activities are related to the administration of sponsored programs in areas of grants management, policy and administration.
“Utilizing AI to reduce our own administrative burden while enhancing our ability to comply is needed now more than ever.”
In September, NCURA’s current and future national and regional leaders will attend a leadership convention to develop a strategic “roadmap” on membership engagement and the “next frontier.” Information has been gathered over the past few months from several stakeholders on assessing trends and shifts in volunteer engagement, volunteer structure and support benefits and challenges, and ways to envision an optimal future of volunteer engagement.
As we navigate these challenging times together, let’s continue to uplift one another and foster our sense of community. I am grateful for the resilience, collaboration, and even good humor in stressful times that NCURA members continue to provide each other. We have the power to shape our experiences and support one another through every challenge. Thank you for your commitment to our mission, and I encourage everyone to take care of yourselves and each other. N
Most Sincerely,



By Brandi Stephens, Vinita Bharat, and Liz Seckel
Since the public release of ChatGPT, artificial intelligence (AI) has rapidly become integrated into nearly every aspect of our daily lives, including academic grant writing. An increasing number of the principal investigators (PIs) we support have been asking us how they can use AI to aid in the grant writing process. As professionals who support scientific development, we believe writing is a deeply personal process and that the final product is best when imbued with the ideas, style, and personality of the writer. The iterative process of drafting and refining one’s work also helps develop scientific writing skills (Quitadamo and Kurtz, 2007), which are essential for a successful long-term career in academia. However, we also believe that PIs and research administrators can benefit immensely from including AI in this process, especially as the algorithms powering these systems improve and become widely available.
Here are five ways that AI can support research administrators and their PIs in the grant application process.
1 Check sponsor and institutional guidelines regarding AI use. Before using AI, you should always check with the sponsor and your institution to ensure AI is allowed and, if so, note any restrictions regarding its use and disclosure. For example, the NIH issued a policy on July 17th, 2025 (NOT-OD-25-132) that prohibits applications that are substantially developed with AI. Whether the sponsor requests this or not, we recommend disclosing any AI usage in grant applications. If there are any questions about the policy or if one is not available, reach out to the program officer or your institutional official for clarification.
2 Consider data storage and privacy settings. All publicly available AI chatbots, including ChatGPT, save the prompts and conversations to ‘improve’ their algorithms. Research proposals often contain highly sensitive information—from initial drafts through final submission. Putting any of this precious text into a public chatbot runs the risk of the AI incorporating the ideas and approach into its model. The AI could then suggest these ideas to other users, including competitors! Check if your institution has a preferred, secure AI platform that does not save inputted prompts and text. If so, that is the only AI platform you and your PIs should use. If not, reach out to your institutional officials and make them aware of the need for such guidance and recommendations. AI use will only increase in the future, and institutions need to adapt or risk the security concerns that could result from improper use of AI. It’s also important to remind PIs that even metadata, such as file names or associated researcher names, may be inadvertently exposed in public AI tools.
3 Use it to edit text. AI can enhance text editing, improving readability by simplifying technical terms and jargon-laden paragraphs, correcting grammar, and refining sentence structure. You and your PIs can leverage AI to maintain consistency and clarity throughout the application, ensuring that the final document is coherent and professional.
Careful, though—one major disadvantage of AI is its tendency to generate false information (known as “hallucinations”), which can lead to factual errors and the spread of misinformation. AI-generated text may also be copied verbatim from existing sources without proper attribution or permission, which can create major research integrity and privacy concerns, not to mention embarrassment and loss of professional reputation. Furthermore, relying on AI-generated content will hinder applicants from developing the grant writing skills needed for a long-term career in academia.
On the topic of hallucinations: we strongly caution against using AI to identify eligibility criteria, required documents or sections, and compliance risks at this point, as tempting as it may be. When we experimented and asked AI to generate a list of required documents for a grant, it took us more time to check the list for accuracy than it would have taken to make it ourselves. AI is programmed to sound convincing; it is not programmed for accuracy. Until the accuracy of the generated content from AI improves, we believe it is too error-prone for these critical tasks. As research administrators know, a small administrative error can lead to the entire grant being rejected.
AI is programmed to sound convincing; it is not programmed for accuracy.
4 Use AI to review the grant proposal. AI can roleplay as a grant reviewer, which can be especially helpful for those with limited resources or time. Just as with human reviewers, AI performs best when provided detailed instructions on what to focus on in a review. An AI prompt should include who you are (e.g., Assistant Professor), what you are applying for (e.g., American Heart Association Career Development Award), the specific task (e.g., give feedback on how well the text aligns with the American Heart Association mission), and any relevant context (e.g., the American Heart Association mission). Give additional instructions to further refine its feedback. Keep in mind that although AI can be a valuable tool, we still recommend seeking feedback from human reviewers. Additionally, you can ask AI to mimic specific review criteria from funding agencies, such as significance, innovation, or approach.
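As a sketch of the prompt structure described above, the four elements can be assembled programmatically; the helper name and all values below are illustrative placeholders, not tied to any particular AI platform:

```python
# Minimal sketch: build a structured reviewer prompt from the four
# elements described above (role, award, task, context), plus optional
# sponsor review criteria. All names and values are illustrative.
def build_reviewer_prompt(role, award, task, context, criteria=None):
    parts = [
        f"You are acting as a grant reviewer. I am a {role} "
        f"applying for the {award}.",
        f"Your task: {task}.",
        f"Relevant context: {context}",
    ]
    if criteria:
        # Mimic sponsor-specific review criteria when provided
        parts.append("Evaluate specifically on: " + ", ".join(criteria))
    parts.append("Give concrete, section-by-section feedback.")
    return "\n".join(parts)

prompt = build_reviewer_prompt(
    role="Assistant Professor",
    award="American Heart Association Career Development Award",
    task="give feedback on how well the text aligns with the sponsor's mission",
    context="(paste the sponsor's mission statement here)",
    criteria=["significance", "innovation", "approach"],
)
```

Pasting the resulting prompt ahead of the proposal text gives the model the same framing a human reviewer would receive; follow-up instructions can then refine the feedback.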
AI can roleplay as a grant reviewer, which can be especially helpful for those with limited resources or time.
5 Use AI to level the playing field. AI holds significant promise in democratizing the grant writing and development process by providing accessible, low-cost (or free) grant writing aids to applicants without access to dedicated grant writing support. This could especially benefit writers who prefer a language other than English and face additional challenges with language barriers. Research shows that non-native English speakers often invest significantly more time and effort in conducting scientific activities than native English speakers, which can impede their career advancement (Amano et al., 2023; Berdejo-Espinola & Amano, 2023). By leveraging AI, we can help level the playing field and enhance participation for all researchers.
As AI continues to improve, we invite our audience to reflect on the pain points and opportunities within their work and explore how they could incorporate AI to help improve efficiency and decrease administrative burden. We have recently published a paper (Seckel, Stephens & Rodriguez, 2024) and a GitHub repository to collate and curate resources on this topic, and we invite you to browse and contribute as you work on your next grant submission. We also published a handout highlighting the key points for using AI in grant submissions (Stephens, Seckel & Bharat, 2025), which can be shared as a resource with grant writers and research administrators.
The best way to learn how to use AI for grant submissions is through exploration and experimentation. Generative AI is here to stay, and the sooner we get acquainted with its advantages and disadvantages, the faster we can unlock and harness its potential for research administration. We encourage research offices to begin piloting AI tools in safe, guided ways, and to share lessons learned with the broader community. N
References
Amano, T., Ramírez-Castañeda, V., Berdejo-Espinola, V., Borokini, I., Chowdhury, S., Golivets, M., González-Trujillo, J.D., Montaño-Centellas, F., Paudel, K., White, R.L., & Veríssimo, D. (2023). The manifold costs of being a non-native English speaker in science. PLOS Biology, 21(7), e3002184.
Berdejo-Espinola, V., & Amano, T. (2023). AI tools can improve equity in science. Science, 379(6636), 991.
Quitadamo, I.J., & Kurtz, M.J. (2007). Learning to improve: Using writing to increase critical thinking performance in general education biology. CBE—Life Sciences Education, 6(2), 140–154. https://doi.org/10.1187/cbe.06-11-0203 PMID: 17548876
Stephens, B., Seckel, E., & Bharat, V. (2025). Responsibly Use AI to Develop More Competitive Grants. Version 1. Stanford Digital Repository. Available at https://purl.stanford.edu/vn273fj8262/version/1. https://doi.org/10.25740/vn273fj8262
Seckel, E., Stephens, B. Y., & Rodriguez, F. (2024). Ten simple rules to leverage large language models for getting grants. PLOS Computational Biology, 20(3), e1011863.
Acknowledgements: The authors are grateful to Carolyn Trietsch, Carlos Perez, Jennifer Nguyen, and Sarah Yeats Patrick for their invaluable feedback.

Brandi Stephens, PhD, is a Research Development Strategist for Stanford’s Division of Cardiovascular Medicine and a trained physiologist in biomedical research. She combines her love for science with her passion for grant writing to support clinical fellows, postdocs, and junior faculty in developing competitive grant applications and maximizing funding success. Brandi can be reached at bstep@stanford.edu.

Vinita Bharat, PhD, is the Assistant Director of Science Communication & Research Development Training at Stanford University. Leveraging her 14+ years of research training, she provides personalized grantsmanship advice, leads grant writing bootcamps, and supports early-career researchers, empowering them to design their grant journeys for research excellence and enjoy the process. Vinita can be reached at vbharat@stanford.edu.

Liz Seckel is a Director of Strategic Research Development at Stanford, where she provides individualized grantsmanship advice to all tiers of trainees and faculty. She has helped raise nearly $100 million to advance health equity and received several awards for her scientific work as well as her commitment to philanthropy. Liz can be reached at eseckel@stanford.edu.
National Council of University Research Administrators



•Collaborate: NCURA’s Professional Networking Platform, including discussion boards and libraries sorted by topical area
•Automatic membership in your geographic region
•NCURA’s Resource Center: your source for the best of NCURA’s Magazine, Journal, YouTube Tuesday and Podcast resources, segmented into 8 topical areas
•NCURA Magazine: new issues published six times per year, and all past issues available online
•Sample Policies & Procedures
•NCURA Magazine’s e-Xtra and YouTube Tuesday delivered to your inbox each week
•Special member pricing on all education and products
•Podcasts and session recordings from our national conferences
•Access to and free postings to NCURA’s Career Center
•Leadership and Volunteer Opportunities





Diane Hillebrand, Assistant Director, Research & Sponsored Program Development, University of North Dakota, has been elected to the position of Vice President/President-Elect of NCURA. A dedicated NCURA member for more than 25 years, Diane currently serves as NCURA Secretary and as a faculty member for the Departmental Research Administration workshop. Her service includes roles on the Board of Directors, Chair and member of the Professional Development Committee, Co-Chair of the Pre-Award Research Administration Conference, and Region IV Chair. Diane’s contributions have been recognized with NCURA’s Julia Jacobsen Distinguished Service Award and the Region IV Distinguished Service Award. She is also a graduate of NCURA’s Executive Leadership Program. Upon her election, Diane shared: “I am truly honored and grateful to give back to NCURA by serving in this role. I am deeply committed to membership engagement, believe strongly in the research administration profession and am excited to collaborate with all stakeholders. I wholeheartedly support the future of NCURA and LOVE supporting research…together with my NCURA family. #ResearchImprovesYourLife.”
Tricia Callahan, Interim Director of Research Training, Office of Research Administration, Emory University, has been elected to the position of Secretary of NCURA. As an involved NCURA member for more than 25 years, she has served on the Board of Directors, as Chair of the Select Committee on Peer Programs/Peer Review, as Co-Chair of the Pre-Award Research Administration Conference, and as a member of both the Nominating & Leadership Development Committee and Professional Development Committee. Tricia currently serves as a faculty member for the Level I: Fundamentals of Sponsored Projects Administration workshop and as an NCURA Peer Reviewer. Tricia has been honored as an NCURA Distinguished Educator and a recipient of NCURA’s Julia Jacobsen Distinguished Service Award. She is also a graduate of NCURA’s Executive Leadership Program. Upon her election, Tricia shared: “I am excited and honored to have been elected as NCURA’s next Secretary. I look forward to working closely with my fellow officers, the dedicated staff, and the entire NCURA community to advance our shared mission of promoting excellence in research administration.”
Eva Björndal, Director of Research Award Administration (Pre- and Post-Award), King’s College London, has been elected to the position of Treasurer-Elect of NCURA. Throughout her 15 years of NCURA membership and service, Eva has served on the Board of Directors, as Chair of Region VIII, and Co-Chair of the Annual Meeting. Eva has served on several committees, including the Professional Development Committee, Nominating & Leadership Development Committee, and the Select Committee on Global Affairs. Eva has also received the NCURA Distinguished Educator designation and is a graduate of NCURA’s Executive Leadership Program. On being elected to this position, Eva shared: “It is an honor to be elected as the Treasurer-Elect of NCURA. I really look forward to contributing to NCURA’s continued success and serving the membership in alignment with the organization’s mission and values.”
Tanya Blackwell, Grants and Contracts Supervisor, School of Medicine, Seattle Children’s Research Institute, has been elected to the position of At-Large Board Member. Tanya has spent her 12 years of membership in various roles of service, including as Chair of the Select Committee of Diversity, Equity, and Inclusion, as Chair of the Education Scholarship Fund Select Committee, and a member of the Professional Development Committee. Tanya has also served on the program committees for the Annual Meeting, Financial Research Administration Conference, and Pre-Award Research Administration Conference. She is also a graduate of NCURA’s Executive Leadership Program. On being elected to this position, Tanya shared: “Being elected to NCURA’s Board of Directors is an honor and responsibility that I do not take lightly, as I have witnessed first-hand how impactful these roles are to the future of our organization. Working alongside the other Board members, I look forward to serving NCURA’s membership while upholding our mission, core values, and commitment to diversity, equity, and inclusion. While these are uncertain times for us all, I am certain of our collective ability to advance the profession of research administration.”
Ashley Stahle, Associate Director, Sponsored Programs, Office of Sponsored Programs, Colorado State University, has been elected to the position of At-Large Board Member. As a member since 2015, Ashley has served on the Board of Directors, as a member of the Professional Development Committee, and as Chair of Region VII. Ashley has served on the program committees for the Pre-Award Research Administration Conference and the Financial Research Administration Conference. Ashley is currently serving as the Co-Chair for the 2026 Financial Research Administration Conference. On being elected to this position, Ashley shared: “I’m truly honored and excited to have been elected to serve on the NCURA Board of Directors. Returning to the Board is a meaningful opportunity to continue supporting the incredible work NCURA does for our profession. I’m grateful for the trust and support of my colleagues and all who voted—thank you! I look forward to collaborating with fellow Board members, NCURA staff, and our amazing community to advance the mission of this organization and the field of research administration.”
Hillebrand will take office January 1, 2026, for one year after which she will succeed to a one-year term as President of NCURA. Callahan will take office on January 1, 2026, and will serve a two-year term. Björndal will become Treasurer-Elect on January 1, 2026, and will serve for one year after which she will succeed to a two-year term as Treasurer. Both Blackwell and Stahle will begin serving January 1, 2026, for a two-year term.
By Robert Pilgrim and Dustin Schwartz

When NCURA held its 1st Annual AI Symposium in August 2023, most attendees had heard of ChatGPT, but few had used it. In just 18 months, generative AI has transformed from a curiosity to a critical asset. In another 18 months, it could become a standard part of every workflow, as familiar and necessary as Microsoft Word or email.
Generative AI can rapidly process vast amounts of content, identifying connections across documents. This means a staff member with basic knowledge of research administration and a well-trained AI could outperform peers who do not use AI. For research administrators, some immediate benefits include:
• Responding to repetitive faculty questions
• Reducing the time spent interpreting sponsor guidelines
• Creating and managing internal knowledge bases
• Assisting with report drafting and compliance summaries
A knowledge worker is someone whose primary job involves the creation, processing, and application of information, rather than physical labor. Faculty members, research administrators, analysts, and compliance officers fall into this category. Over the past few decades, tools such as the internet, email, and Google Search have become indispensable to knowledge workers. Today, a new tool is entering this essential toolkit: generative AI.
Just as we cannot imagine functioning without web search, those who embrace generative AI will soon feel the same way about their AI assistants. These tools enable knowledge workers to efficiently digest information across hundreds of documents in seconds, identify key points, suggest actions, and automate repeatable tasks. More importantly, these tools can elevate the contributions of junior or less experienced staff, offering them real-time access to expert-level guidance.
Generative AI enables interaction through plain English. Platforms such as ChatGPT, Claude, Copilot, and Perplexity allow users to upload documents, “chat” with these documents, and receive insights. You can build context-specific tools to help:
• Answer questions about guidelines, policies, and procedures
• Guide researchers in preparing bios
• Review and improve draft proposals
• Summarize compliance requirements
• Answer common questions from departments or units
These solutions can be made more effective by uploading sponsor guidelines such as the NSF Proposal & Award Policies & Procedures Guide (PAPPG), which AI can reference for detailed responses.
In April 2025, a leaked internal memo from Shopify, a $129 billion global company, included details that may be a signal of things to come. It made one thing unmistakably clear: artificial intelligence is not optional. According to the memo, AI adoption is now a baseline expectation for every employee at every level, without exception (Louise, 2025). Some of the key points in the memo could be a powerful signal of what’s coming for the rest of us:
• AI is part of every workflow
• AI integration will be evaluated during annual reviews
• Before requesting resources, teams must justify why AI can’t meet their needs

Figure 1. Academia is typically slower than industry to adapt to new technology.
Institutions must adopt AI now to remain competitive and resource-efficient. While private sector organizations resemble agile speedboats, universities are more like oil tankers, difficult to steer but incredibly powerful once moving in the right direction. By empowering frontline administrators with AI, institutions can begin the long but necessary shift toward operational agility.
A Common Use Case
The Problem
Example: Proposal submissions are increasing, especially from junior faculty and interdisciplinary collaborations. Many of these users are unfamiliar with proposal procedures and submission systems. Staff are inundated with redundant questions and often refer to lengthy PDFs or SharePoint pages that few people read.
The Solution

A chatbot trained on institutional workflows and public documents can answer common questions around proposals, submissions, and compliance. It can offer links to the correct templates, point out which training is required, and even quiz users on whether their project is eligible for submission under a particular program.

Setting Up and Deploying Your Chatbot
Use a platform such as Perplexity to get started. It doesn’t require an account to test or share bots, so you can get up and running in minutes.

Training Your AI Assistant
Think of your AI assistant as a new hire: technically skilled, but in need of training on your institution’s specifics. Build a knowledge base using:
• Public-facing documents. Looking over your website is a good place to start.
• Internal procedures (if safe to share)
• Living Q&A documents added by staff
Maintain a central, editable file (e.g., Microsoft Word, Google Docs) to track new questions and answers. Update the chatbot weekly using this file. Assign someone to curate content and ensure consistency. This ensures the bot reflects new practices and avoids confusion from outdated answers.
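One lightweight way to keep that living Q&A file healthy is to drop duplicate questions before each weekly upload. The sketch below assumes a simple plain-text convention (one Q:/A: pair per block, separated by blank lines); the format and function name are illustrative, not a requirement of any platform:

```python
# Sketch of a living Q&A knowledge-base file: one "Q: ... / A: ..." pair
# per block, blocks separated by blank lines. This drops blocks whose
# question line exactly repeats an earlier one (case-insensitive).
def dedupe_qa(text):
    seen, kept = set(), []
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if not lines:
            continue
        question = lines[0].lower()
        if question not in seen:
            seen.add(question)
            kept.append(block.strip())
    return "\n\n".join(kept)

raw = """Q: Where do I find the budget template?
A: On the sponsored programs intranet page.

Q: Where do I find the budget template?
A: On the sponsored programs intranet page.

Q: What training is required before submission?
A: Conflict-of-interest and responsible-conduct training."""

cleaned = dedupe_qa(raw)
```

A curator can run a pass like this over the exported document each week, so the chatbot never receives two conflicting copies of the same question.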
Basic setup steps:
1. Title & Description: Choose a meaningful name and purpose.
2. Custom Instructions (Prompt Engineering): Specify the bot’s tone, user level, and what to do when unsure.
3. Upload Files: Documents such as policy guides.
4. Add Public Links: Include institutional and sponsor websites that are open access.
Example Custom Instructions: “You are an expert Research Administrator at an R1 institution. You are responsible for helping faculty navigate complex proposal processes, ensuring compliance with federal guidelines (e.g., NSF PAPPG), and improving proposal quality across your institution. Your tone should be clear, professional, and supportive. If you don’t know the answer, suggest a next step or resource. Prioritize concise, policy-aligned answers and clarify ambiguous questions through follow-up prompts. Assume your users are new faculty with limited grant writing experience. Use the supplied files and links to inform your responses. If relevant information can be found in those files, provide a summary and guide users to the appropriate section or document.”
Start small. Target junior faculty. They ask predictable, high-impact questions. As the bot gains value, expand its scope.
Imagine a scenario where:
• AI scans recent publications/proposals and awards and aligns faculty interests with new funding solicitations
• A second AI drafts proposal templates tailored to those opportunities
• A third AI evaluates the drafts using sponsor-specific rubrics, suggesting improvements
• The process iterates until a strong draft is ready for human review
All of this can happen in minutes, not weeks. The implications are massive: greater speed, increased quality, and more time for strategic work by human staff.
It may take many years of training before we have a knowledgeable, competent human research administrator. When this person leaves or retires, the process must begin again, and knowledge must constantly be updated as new regulations and guidelines are released. Unlike human knowledge, which cannot be transferred directly between people, a digital AI solution can be copied, pasted, modified, and merged in seconds for virtually zero cost.
This opens many opportunities for those in research administration to become significantly more efficient, effective, and productive, something that would have been difficult or even impossible just 24 months ago. By using publicly available data and a few straightforward steps, you can create a helpful digital assistant tailored to your team's specific needs without writing a single line of code. Don't get left behind; you can start building AI solutions for free today with public data! N
Reference
Louise, N. (2025, April 8). Shopify CEO Tobi Lütke confirms leaked internal memo on social media about "hiring AI before humans." Tech Startups. https://techstartups.com/2025/04/07/shopify-ceo-tobi-lutke-confirms-leaked-internal-memo-on-social-media-about-hiring-ai-before-humans/


Robert Pilgrim, PhD, is the Associate Director for Data Strategy & Insights, Division of Research & Innovation, University of Arkansas. Robert helps support research analytics projects on campus and has a keen interest in the emerging field of AI and its application to research administration. He can be reached at rpilgrim@uark.edu.
Dustin Schwartz is the Electronic Research Administration Manager, Division of Research & Innovation, University of Arkansas. Dustin specializes in leveraging technology to solve problems and recently helped lead the implementation of an AI-powered faculty-solicitation matching platform for the university's researchers. He can be reached at djschwar@uark.edu.
Standard, Expedited, or Exempt: Can ChatGPT Determine IRB Review Category?
Emmett Lombard, Gannon University
Exploring Marginality, Isolation, and Perceived Mattering Among Research Administrators
Denis Schulz, California State University San Marcos, Karen Gaudreault, University of New Mexico, and Ruby Lynch-Arroyo, University of Texas at El Paso
A Case Study of Research Administrator Perceptions of Job Satisfaction in a Central Research Administration Unit at a Private University
Noelle Strom, University of Denver
Research Security and the Cost of Compliance: Phase I Report
Council on Government Relations
BOOK REVIEW: The SAGE Handbook for Research Management, 1st ed.
Chloe Brown, Texas State University
BOOK REVIEW: The Mentor’s Guide: Five Steps to Build a Successful Mentor Program, 2nd ed.
Clinton Patterson, Texas A&M University

The Research Management Review invites authors to submit article proposals. The online journal publishes a wide variety of scholarly articles intended to advance the profession of research administration. Authors can submit manuscripts on diverse topics.
www.ncura.edu/Publications/ResearchManagementReview.aspx
Research compliance and funding management are critical, but complex. Artificial intelligence can ease the burden and enhance institutional outcomes.
• Complex administrative processes
• Labor-intensive documentation
• Time-consuming risk detection and recordkeeping
• High-effort funding and collaborator identification
• Inefficient award portfolio management
• Data-heavy tasks like reporting and setup
• Smart summaries: Provides tailored guidance based on policies or project terms
• Proactive monitoring: Flags risk and potential violations
• Auto-filled forms: Uses your systems to complete submissions faster
• Collaboration matching: Connects researchers with shared goals
• Proposal tools: Recommends funding, compiles award info
• Workflow automation: Accelerates award setup and reporting
Let AI power your research compliance and operations.
Connect with our team to learn more.
go.hcg.com/research-enterprise
In June, the NCURA Board of Directors convened virtually for our regularly scheduled board meeting. President Denise Moody officially called the meeting to order and opened with thoughtful welcoming remarks. She encouraged all members to consider attending the upcoming 3rd Annual AI Symposium, a key initiative chaired by Nancy Lewis, Lori Schultz, and Thomas Spencer. Denise expressed her sincere appreciation for the Board members, recognizing their ongoing dedication and service to NCURA, particularly in light of the ongoing pressures and challenges affecting our profession and institutions.
Denise provided an update on the upcoming NCURA Regional Leadership Conference, which will take place in September. In preparation for the event, leaders from regional groups as well as national committees were asked to reflect on and respond to a series of strategic questions. These questions closely align with those posed by past President Kris Monahan during her term, continuing a thoughtful and intentional dialogue about the future direction and priorities of the organization.
The Board recognized and extended gratitude to Dr. Andrea Hollingsworth, keynote speaker at the 2025 PRA Conference, whose donation of her speaker fees inspired the creation of the NCURA Bridge Fund. Kris Monahan played a key role in drafting the initial vision for the fund, and thanks to the swift efforts of NCURA staff, the initiative has been brought to life and is already making a positive impact.
In addition to regular committee reports, a number of important action items were presented by several select and standing committees for Board discussion and approval. The following items were reviewed and approved:
■ Nominating and Leadership Development Standing Committee – Three Distinguished Educator nominations
■ Professional Development Committee – Six Traveling Workshop Faculty nominations and the nomination of a Co-Editor for NCURA Magazine
■ Select Committee on Global Affairs – Nominations for six Global Fellowships

Vice President Shannon Sutton provided an update on the 67th Annual Meeting (AM67). Shannon shared details highlighting the wide range of opportunities for professional networking, high-quality educational sessions, and member engagement. Alongside her co-chairs, Candice Ferguson and Kathy Durben, Shannon and the full program committee worked diligently to deliver an exceptional conference experience for all attendees.
As students return to campus and the Fall semester begins, I want to take a moment to extend my warmest wishes to our entire NCURA community. May this new academic year be filled with growth, connection, and success for you and your teams.

If you have any questions, comments, or concerns, please feel free to reach out to any member of the Board of Directors or NCURA staff.

Kay Gilstrap, CRA, is the NCURA Treasurer and serves as Director for Research Internal Controls and Policy Development at Georgia State University. She can be reached at kgilstrap@gsu.edu.
BOARD OF DIRECTORS
President
Denise Moody
Lundquist Institute for Biomedical Innovation
Vice President
Shannon Sutton
Georgia State University
Immediate Past President
Kris Monahan
Harvard University
Treasurer
Kay Gilstrap
Georgia State University
Secretary
Diane Hillebrand
University of North Dakota
Executive Director
Kathleen M. Larmett
National Council of University Research Administrators
Eva Björndal
King’s College London
Natalie Buys
University of Colorado Anschutz Medical Campus
Jennifer Cory
Stanford University
Jill Frankenfield
University of Maryland, Baltimore
Katy Gathron
MD Anderson Cancer Center
Melanie Hebl
University of Wisconsin-Madison
Katherine Kissmann
Texas A&M University
Rosemary Madnick
Lundquist Institute for Biomedical Innovation
Danielle McElwain
University of South Carolina
Nicole Nichols
Washington University in St. Louis
Scott Niles
Georgia Institute of Technology
Lamar Oglesby
Rutgers University
Geraldine Pierre
Boston Children’s Hospital
Lori Ann Schultz
Colorado State University
Thomas Spencer
University of Texas Rio Grande Valley
By Kathleen Halley-Octa and Kari Woodrum
In the ever-evolving landscape of research administration, artificial intelligence (AI) presents both challenges and opportunities. As new research administrators, your fresh perspective, digital fluency, and adaptability uniquely position you to help shape how we integrate these technologies into our daily work.
Understanding the AI Landscape
AI is no longer confined to science fiction or specialized computer labs. Today, AI tools appear in our inboxes, spreadsheets, budgeting tools, and compliance platforms. At its core, AI refers to systems trained to perform tasks that traditionally require human input, such as interpreting language, detecting anomalies, or generating reports. Machine learning and natural language processing allow these systems to learn from patterns and interact more naturally with users. The emergence of generative AI, such as ChatGPT or DALL-E, adds the ability to produce original text, images, and even financial forecasts. For instance, some research administrators are using these tools to help draft budget justifications, generate proposal summaries, or brainstorm outreach language for broader impacts.
But not all AI outputs are created equal. Apply critical analysis to catch results that miss the nuance or context a human reviewer would provide.
The Critical Role of Human Oversight
AI can process massive amounts of data and identify patterns, but it can't always understand context or anticipate ethical implications. For instance, an AI tool might flag a travel expense as unallowable when in reality, it's a grant-approved expenditure tied closely to the scope of work approved by the sponsor. Without human review, these tools can cause confusion or frustration and lead to incorrect reporting.
This is why critical thinking remains central to the research administrator’s role. Your ability to ask questions, interpret results, and apply institutional knowledge makes AI effective, not vice versa.
AI is already being used across several areas of financial research administration to streamline tasks and support decision-making. While these tools are not magic solutions, they offer real efficiencies when paired with critical human oversight. Below are some specific, real-world applications where AI is already making a difference:
1. Expense Auditing and Anomaly Detection: Tools like AppZen use AI to audit expense reports automatically. These systems scan receipts, flag potential policy violations, and detect patterns that may signal errors or fraud.
2. Monitoring Spending Trends and Forecasting: Platforms such as Anodot apply machine learning to financial data to identify outliers or unexpected shifts in spending behavior. While these tools don't replace budgeting expertise, they can highlight areas that merit further review—such as a sudden increase in subaward payments or travel costs.
3. Budgeting and Financial Planning: Enterprise systems like Workday Adaptive Planning incorporate AI capabilities for financial forecasting and scenario modeling. This helps post-award teams assess how changes in funding levels or spending rates could impact the overall budget without manually recalculating spreadsheets.
4. Travel and Expense Automation: Products such as SAP Concur use AI to streamline travel and expense management. The system can read and categorize receipts using optical character recognition (OCR), match them to trips, and flag duplicate or out-of-policy charges.
5. Spend Analysis and Procurement Optimization: Coupa is another real-world tool that leverages AI to analyze institutional purchasing data. It can identify vendors with high error rates, suggest more cost-effective options, and monitor invoice processing trends for signs of inefficiency or risk.
While these tools can automate repetitive processes and surface insights faster than a manual review, they aren’t perfect. Anomalies flagged by AI still require human interpretation to determine if action is needed.
As a research administrator, your role is to bring context to the machine’s output, especially when understanding grant-specific stipulations or institutional policies. These tools don’t remove responsibility; they redistribute it. You’re still the expert—AI helps you move faster and focus your expertise where it matters most.
AI’s Limitations: Know What to Watch For
It’s equally important to understand what AI can’t do—at least not reliably. Current AI systems are prone to a few well-known issues:
• Hallucinations – AI sometimes invents facts that sound plausible but aren’t true, such as citing a grant policy that doesn’t exist.
• Randomness – The same prompt can yield different outputs depending on how the system interprets it. You might ask an AI tool to draft a cost-share justification and receive a different version each time, each potentially containing conflicting information.
• Bias – AI learns from the data it’s trained on. If that data is biased, so are the results. For example, an AI tool may suggest preferred vendors for lab supplies but consistently deprioritize minority-owned businesses, reflecting biased training data from past procurement trends.
• Struggles with Negatives – AI often misinterprets statements with “not” or “except.” For instance, if you prompt AI to list expenses not allowed under participant support costs, it may return a list of allowed expenses instead, leading to confusion and possible noncompliance if not caught.
• Lack of Context – It can’t always tell when something that looks incorrect is valid in a particular situation. For instance, AI flags a late subcontract payment as an error, unaware that it was intentionally delayed due to a pending IRB approval, something a human would recognize as a valid exception.
This is where your analytical skills come in. Every AI-generated output should be reviewed critically—ideally with a process that includes validation, ethical review, and a clear understanding of institutional guidelines. Think of AI as a first draft, not a final answer—your expertise ensures accuracy, context, and alignment with sponsor and institutional expectations.
How You Can Help Your Team Use AI Wisely
As someone early in your research administration career, you may already feel more comfortable with new technologies than some of your colleagues. That’s a strength you can bring to your organization. Here are a few ways to be a change leader:
• Model Curiosity, Not Certainty: Show that you’re experimenting thoughtfully with AI tools. Share both successes and things that didn’t work.
• Host Informal Demos: Offer your team short, informal “AI 101” sessions. Demonstrate how you used a tool to generate a report, visualize data, or double-check a budget narrative.
• Promote Ethical Use: Talk openly about AI limitations and privacy concerns. Remind colleagues to avoid feeding tools sensitive or proprietary data and never rely on AI alone for compliance tasks.
• Be Aware of Institutional Restrictions: Many universities have implemented policies that restrict or prohibit the use of generative AI tools with certain types of data—especially sensitive, confidential, or proprietary grant information. These limitations are often driven by concerns around data security, privacy, and vendor compliance. Before using AI tools in your workflow, check your institution’s guidelines and approved tools list to ensure you’re in compliance.
• Encourage Prompt Engineering: Share tips for writing better prompts: be specific, provide context, and ask follow-up questions. Show how small changes in phrasing can improve AI results.
• Start with Small Wins: Look for low-stakes ways to test AI in your day-to-day work—such as summarizing meeting notes or refining a budget narrative. Small successes can build your confidence and spark ideas to share with others.
One helpful way to think about AI is to treat it like an intern: eager, fast, and surprisingly capable—but still in need of guidance and oversight. The more clearly you explain what you need and provide feedback, the better the results. Framing it this way can help you use AI more effectively and build confidence as you explore its potential.
Ethical Questions Every Research Administrator Should Ask
Ethics have always been foundational to research administration—from ensuring compliance to safeguarding integrity in proposal review and financial stewardship. As we integrate AI into our workflows, these responsibilities don’t go away—in fact, they become even more critical. AI introduces new questions about data use, accountability, and transparency that require the same level of thoughtfulness and scrutiny we already apply to sponsor regulations and institutional policies.
As we bring AI further into our daily work, we also have to ask the tough questions:
• Is the data we’re using fair and complete?
• Who is accountable when an AI system makes a mistake?
• Are we transparent with researchers, sponsors, and leadership about how we’re using AI?
• How do we ensure confidentiality, especially when tools process sensitive grant data?
These questions won’t always have simple answers, but asking them keeps ethics at the forefront. In a field where trust, compliance, and integrity are essential, introducing AI means those principles matter even more. By staying curious and cautious, you can help ensure that AI enhances—not compromises—the ethical standards research administration is built on.
The future of research administration is likely to include deeper AI integration—predictive financial modeling, automated reporting, and even AI-assisted peer review. As a new research administrator, you are in a unique position. You can help your organization explore AI in thoughtful, practical ways while fostering a culture of critical engagement. Whether you’re reconciling budgets, reviewing subawards, or drafting compliance language, AI can be a powerful co-pilot—if you remain in the driver’s seat. So, explore. Question. Share what you learn.
The tools will continue to evolve—and so will your skills. Staying informed, asking good questions, and engaging in open dialogue with colleagues will help you use AI wisely and confidently. You don’t have to have all the answers to lead the way—just a willingness to explore, learn, and share. The crossroads of AI and critical thinking is where the future of research administration begins—and you’re already standing at the intersection. N


Kathleen Halley-Octa is a Manager with Attain Partners’ Research practice. She has 15 years of experience in research administration, with expertise in pre-award operations, training development, and change management. Kathleen focuses on eRA system implementations, organizational assessments, and business process improvement. She can be reached at kmhalleyocta@attainpartners.com.
Kari Woodrum is a Grants and Contracts Assistant with the Prairie Research Institute at the University of Illinois, Urbana-Champaign. In this role, she supports researchers throughout the full life cycle of grants and contracts, including budget development, proposal preparation and submission, financial monitoring, and compliance with sponsor and institutional requirements. She can be reached at kwoodrum@illinois.edu.

This handy reference includes the regulatory updates issued April 2024, OMB FAQS and Key Revisions issued January 2025. Bulk Pricing Available. Order Yours!
By David Richardson and David Smelser
One cannot ignore the words “artificial intelligence,” for they are everywhere, from the barrage of news stories and advertisements to promises and wild rumors. AI is increasingly becoming a part of our day-to-day lives at home and at the office. In a recent office-wide survey on the use of AI at the University of Illinois at Urbana-Champaign Sponsored Programs Administration, nearly all respondents indicated some daily use of AI for editing, drafting emails, researching information, and idea generation. With the inevitability that AI will become an integral part of your work, it is best to be knowledgeable about it so you can better understand its capabilities, limitations, and how it will influence your job. While we cannot predict the future, we agree that the future of research administration involves artificial intelligence. We believe the advances in artificial intelligence will permit it to autonomously perform work that is rote, allowing us time to focus on work that is more meaningful, with more substantive contributions to the research enterprise. As an example, take the task of writing an article for NCURA Magazine. Everyone knows it is easier to respond to someone else’s draft than it is to start from scratch. We provided the following prompt (words in italics below) to Microsoft Copilot to generate a draft.
Write a 2.5 page article on Research Administration and how the use of Artificial Intelligence could be useful in transforming the field of Research Administration, creating new efficiencies, and supporting data-driven decision making.
Transforming Research Administration with Artificial Intelligence: Creating New Efficiencies and Supporting Data-Driven Decision Making
Research administration is a critical component of the research ecosystem, encompassing the management of research projects, funding, compliance, and reporting. As the volume and complexity of research activities continue to grow, the need for efficient and effective research administration has become more pronounced. Artificial Intelligence (AI) offers promising solutions to transform the field of research administration, creating new efficiencies and supporting data-driven decision making.
Research administration involves a wide range of activities, including grant management, compliance monitoring, financial oversight, and reporting. Administrators are responsible for ensuring that research projects are conducted in accordance with regulatory requirements, institutional policies, and funding agency guidelines. They also play a crucial role in managing budgets, tracking expenditures, and ensuring that research outcomes are accurately reported.
Given the multifaceted nature of research administration, administrators often face challenges related to data management, compliance, and resource allocation. Traditional methods of managing these tasks can be time-consuming and prone to errors, leading to inefficiencies and potential compliance issues. This is where AI can make a significant impact.
AI technologies, such as machine learning, natural language processing, and robotic process automation, have the potential to revolutionize research administration. By automating routine tasks, enhancing data analysis, and providing predictive insights, AI can help administrators streamline their workflows and make more informed decisions.
One of the most immediate benefits of AI in research administration is the automation of routine tasks. For example, AI-powered tools can automate the process of grant application review, reducing the time and effort required to evaluate proposals. These tools can analyze large volumes of text, identify key information, and flag potential issues, allowing administrators to focus on more strategic activities.
Similarly, AI can automate compliance monitoring by continuously scanning research activities for potential violations of regulatory requirements. This can help institutions stay ahead of compliance issues and reduce the risk of penalties.
Research administration involves the management of vast amounts of data, from financial records to research outputs. AI can enhance data analysis by identifying patterns and trends that may not be immediately apparent to human administrators. For instance, machine learning algorithms can analyze historical funding data to identify factors that contribute to successful grant applications. This information can be used to guide future funding strategies and improve the chances of securing grants.
AI can also support data-driven decision making by providing real-time insights into research performance. For example, AI-powered dashboards can visualize key performance indicators, such as publication rates, citation counts, and funding levels, allowing administrators to monitor progress and make informed decisions about resource allocation.
One of the most powerful applications of AI in research administration is its ability to provide predictive insights. By analyzing historical data and identifying patterns, AI can predict future trends and outcomes. For example, AI can forecast funding availability based on historical trends and current economic conditions, helping institutions plan their research budgets more effectively.
AI can also predict potential compliance issues by analyzing patterns of past violations. This can help administrators proactively address compliance risks and implement preventive measures.
While the potential benefits of AI in research administration are significant, there are also challenges and considerations to keep in mind. One of the primary challenges is the need for high-quality data. AI algorithms rely on large volumes of accurate and relevant data to generate meaningful insights. Institutions must invest in robust data management practices to ensure that their data is reliable and up-to-date.
Another consideration is the ethical use of AI. Administrators must ensure that AI tools are used in a transparent and accountable manner, with appropriate safeguards to protect privacy and prevent bias. This includes regularly auditing AI algorithms to ensure that they are functioning as intended and do not perpetuate existing inequalities.
AI has the potential to transform research administration by automating routine tasks, enhancing data analysis, and providing predictive insights. By leveraging AI technologies, research administrators can create new efficiencies, support data-driven decision making, and ultimately improve the management of research activities. However, it is essential to address the challenges and considerations associated with AI implementation to ensure that its benefits are realized in a responsible and ethical manner.
As the field of research administration continues to evolve, AI will undoubtedly play an increasingly important role in shaping its future. Institutions that embrace AI technologies and invest in the necessary infrastructure and training will be well-positioned to navigate the complexities of modern research administration and achieve their research goals.
Various AI models have made significant improvements in the last few years; however, in its current state AI is not ready to assume full control over the entire sponsored project lifecycle. While Copilot produced a realistic draft, it is not perfect and should be validated by subject matter experts (SMEs) to ensure AI’s end product is accurate and reasonable. As SMEs, we take the following exceptions to Copilot’s draft:
1. AI’s predictive capabilities depend on accurate and relevant data, especially if the data needed to forecast future actions is contained in a local or closed environment. However, not even the best AI predictive models would have predicted what has happened in research administration over the past six months. Some AI models may see things faster than humans, but humans can see things AI can’t.
2. We liked how it mentioned institutions willing to invest in AI, but we think getting institutions to implement AI remains a major challenge. For example, the University of Tennessee seems to have a willingness with a dedicated Associate Vice Chancellor and Director of the AI Tennessee Initiative, but it will take time and visionary directives to push AI adoption across the campus’ research administration community.
3. We advocate for you to invest your time, dollars, and energy in creating an AI environment that is supplemental and supportive of existing workflows and software. We encourage NCURA to continue to advocate for AI-focused sessions and specialized forums that showcase success stories across the sponsored lifecycle.
We look forward to the incorporation of AI-driven tools to improve our research administration operations, and we look forward to recreating this article as the evolution of AI changes our profession for the better! N


Dave Richardson is the Executive Associate Vice Chancellor for Research and Innovation at the University of Illinois, Urbana-Champaign. He can be reached at daverich@illinois.edu.
David Smelser is the Senior Director of Innovation & Optimization at the University of Tennessee, Knoxville. He can be reached at dsmelser@utk.edu.


Streamline the entire research lifecycle with fully integrated, user-friendly software








The research management ecosystem is complex and diverse with multiple stakeholders supporting the research infrastructure.
NCURA’s comprehensive resource can help to support your team and partner offices across your institution. It covers the full range of issues impacting the grant lifecycle with more than 20 chapters including:
•Research Compliance
•Subawards
• Audits
•Export Controls
•Administering Contracts
•Sponsored Research Operations Assessment
•Pre-Award Administration
•Intellectual Property & Data Rights
• F&A Costs
•Regulatory Environment
•Communications
•Organizational Models
•Post-Award Administration
•Special Issues for Academic Medical Centers
•Special Issues for Predominantly Undergraduate Institutions (PUIs)
•Training & Education
•Staff and Leadership Development
•1100 index references
•40 articles added over the past year
• 21 chapters
•Updated 4 times a year
•1 low price

With today’s remote work environment, this comprehensive ePDF resource can be shared with your colleagues across your institution.
Recent Articles Include:
• Evaluating the Impact of Internal Submission Deadline Policy on Grant Proposal Success
• Preparing for and Surviving an Audit: Helpful Tips
• When Did You Say Your Proposal Was Due? Working on Short Deadlines
• Defining and Documenting Financial Compliance for Complex Costs
• Troublesome Clauses: What to Look for and How to Resolve Them
• Rewards of a Two-Part Subrecipient Risk Assessment
• Proposal Resubmission: Overcoming Rejection
• Mitigating Audit Risk at Small Institutions
• Fundamentals of Federal Contract Negotiation
• Managing Foreign Subawards from Proposal to Closeout
• How to Reduce Administrative Burden in Effort Reporting
• Strategies for Increasing Indirect Cost Recovery with Non-Federal Sponsors
• A Guide to Industry-University Cooperative Research Centers (IUCRCs)
• Building Research Administration Community Through Service
• Turning Off Turnover: The Use of a Progression Plan to Attract and Retain Employees
AI: Research Administrators Meeting the Moment
By Lori Ann Schultz

For the last 70 years, research administrators have been behind the scenes of every major scientific and technology development. We have worked with our researchers in support of life-saving medical interventions, missions to space, and the sequencing of the human genome. We have the curiosity to understand complex problems, the courage to navigate uncertainties and changing times, and the commitment to stay focused, even when the work is complex or unseen.
In the past five years, in particular, the work of research administrators has helped researchers navigate unprecedented events, such as the COVID-19 pandemic and the current state of the federal research ecosystem. We have proven our ability to adapt and continue to provide service and support to advise researchers when everything around them is changing. We can adopt that spirit to provide guidance, model best practices, and capitalize on the growth of generative AI tools and the possibilities they represent for our work.
Research administrators can use generative AI tools today in several ways to streamline day-to-day operations and support research within their organizations. We can use free versions of these tools without compromising regulated data, privacy, or business-sensitive information. Even more options open up when our organizations are involved in setting guidelines, operating AI tools locally, and providing governance. The following are examples of things research administrators can start doing today, as well as things that require institutional or organizational buy-in, to help transform the way we provide service throughout the research administration lifecycle. These examples are not a replacement for the people who work in our field: our expertise is needed to vet the output of these tools and to understand the work deeply enough to use them effectively. Generative AI tools are resources that augment that expertise, not substitutes for it.
Proposal Development & Submission
• Proposal review checklists against a published solicitation
• Polishing language on budget justifications and letters of support

Receipt of Award / Negotiation
• Award summary sheet for PI/Co-PIs
• Suggested language in the negotiation process

Award Management
• Project management timelines
• Calendar for project milestones & deadlines

Compliance
• Draft data management plans
• Identify compliance requirements by sponsor

Training & Staff Development
• User aids for proposal submission systems
• Training calendars for staff development
• Policy drafting
The list above is not exhaustive. The people working in AI in research administration are already devising more innovative and unique applications of AI to support the research mission. It’s what we do.
Generative AI tools have the potential to automate routine transactional activities, allowing us to spend more time on priorities such as supporting researchers. Freeing up time will enable us to help our researchers effectively utilize AI in their work, and to advise our leadership on how this technology transforms our work. It also means we can provide more meaningful development opportunities for the staff we are hiring, developing, and promoting.
With institutional or organizational buy-in:
• Checking proposal documents against sponsor requirements
• Matching proposal narrative to the Request for Proposals
• Matching opportunities to relevant faculty
• Automated process for PI acceptance of award terms
• Contract triage & fallback positions for common “troublesome” clauses
• Automated invoicing/financial reporting
• Analysis of spending: burn rates, allowable costs, alerts for prior approval requirements
• IRB protocol triage
• IACUC protocol triage
• Workload analysis & benchmarking
• Chatbot for the body of research knowledge
The use of AI tools is simply the next challenge, one we can meet head-on, and one that doesn’t have to create crisis. Let’s meet it like we have other challenges. This is our moment.
Keep an eye on the Federal Demonstration Partnership (FDP) as they work to include AI solutions to reduce administrative burden. Please get in touch with Lori Schultz at lori.schultz@colostate.edu if you’d like to learn more. N

Lori Ann M. Schultz is the Assistant Vice President, Research Administration at Colorado State University. She loves to write and did not use a generative AI tool to write this article. She can be reached at lori.schultz@colostate.edu.

Have You Forgotten How to Fly?
By Lamar Oglesby
There’s a moment in the movie Hook when a Lost Boy gently studies Peter Banning, a grown-up Peter Pan who no longer remembers who he was or the beloved abilities he once had, like flying, fighting, and crowing. And while flying is certainly a fantastic skill, the story isn’t really about flying. It’s about wonder, and the slow, quiet erosion of joyful thinking under the weight of deadlines, regulations, structures, processes, and spreadsheets. In a world of rapid adoption of artificial intelligence (AI), consider how readily and easily we trade the complex for automation, and how our sense of purpose begins to shrink under the pressure of dashboards, AI scripts, and efficiency metrics.
In research administration, it seems that we are now being tempted to let the machine think for us. To let algorithms solve problems we once approached with intuition, collaboration, and deep institutional memory. But this profession, as demanding as it is, was never meant to be robotic: “When AI tools take over these tasks, individuals may become less proficient in developing and applying their own problem-solving strategies, leading to a decline in cognitive flexibility and creativity” (Gerlich, 2025). Research administration is a human craft, one that calls on our imagination to solve the unsolvable, to guide faculty through chaos, to find beauty in budgets, and meaning in compliance.
So the question isn’t just for Peter Pan. It’s for us, for you—the analysts, the specialists, the directors, the grant whisperers: Have you forgotten how to fly?
In the age of AI, forgetting cuts even deeper. As the world advances, it’s natural to lose touch with the ways we used to engage in our work. Colleagues often have relics of yesteryear’s work tools on their desk to serve as nostalgic pieces of art such as old Mac or IBM monitors or printing calculators. But this is different and it feels accelerated. We live in a world where thinking itself is being outsourced. In research administration, where complexity and nuance have always demanded both precision and creativity, we’re watching a quiet shift unfold. Tools once designed to support insight now begin to replace it.
Bots can generate answers and prompts before we reflect and wonder. The voice of the machine grows louder than the whisper of our inner curiosity. And in that space between speed and silence, many of us have unknowingly let go of something essential. Not just our imagination, but the joy of solving messy problems, the satisfaction of navigating ambiguity, the art of human judgment. AI can optimize, but it cannot wonder. What I’m suggesting is that perhaps the greatest risk in over-relying on AI isn’t error alone, but forgetting that the greatest attribute of a research administrator is knowing how to fly.
“You can’t use up creativity. The more you use, the more you have.”
Creativity is not a finite resource; it’s a muscle, developed through resistance and nurture. Like any muscle, it atrophies when unused. AI tempts us to skip the warm-up, to outsource the effort. But doing so robs us of the chance to grow: to stretch, to think through the nuance, to become sharper in our judgment and bolder in our solutions. The work may be hard, but it’s in the doing, in the creative strain of navigating a budget crisis, rewriting a compliance memo, or writing a difficult but necessary email, that we build our professional strength.
AI can assist and support our profession in ways beyond human ability, but it cannot replace the instincts we refine through repetition, the insights we earn through struggle, or the creative courage it takes to lead in uncertainty. If we forget to use our muscles, we may wake up one day brilliant at automation, with impressive productivity and efficiency measures, but out of shape for the very work that makes this profession matter.
There was a time when creativity felt like a storm: wild, disordered, and alive. You could feel it in a room full of people brainstorming with no agenda other than the pursuit of something new. One of the most electric moments I’ve experienced recently came during a brainstorming session. Five professionals and scholars from various walks of life, equipped with a whiteboard and dry-erase markers, were challenged to devise a theoretical model for virtuality. There were no slides, no formal structure, no laptops, and no easy access to ChatGPT. Just an open invitation to think dangerously, collaboratively, and without filters. It reminded us what true creative energy feels like: chaotic, uncertain, frustrating, and ultimately thrilling.
But in too many spaces, the storm has gone quiet and AI is right there, always ready with an answer, always eager to help, like Rosie from The Jetsons turned into a well-meaning but overbearing helicopter parent. History, and our favorite cartoons, have reminded us that convenience comes with a cost. Why chase the muse when a machine can deliver five ideas in five seconds? Why struggle with a question when an algorithm offers instant resolution? We may think we’re being more creative, but we’re outsourcing the very struggle that gives creativity its soul, humans their ability to create, and research administrators their ability to fly.
Embrace the storm in all its glory, its inconvenience, and its deeply human experience, where imagination lives. Creativity demands friction. It thrives in uncertainty. You know exactly what that feels like, don’t you? When it feels impossible to meet the deadlines, answer the questions, and make it work under the most unreasonable timeline and circumstances. But you figured it out. You always have. There’s power in that process, but it needs space to stumble, to question, to be wrong. Many researchers have argued that pressure is an effective enabler or inhibitor of creativity (Gupta, 2015). In a study conducted by Robert Eisenberger and Justin Aselage, “performance pressure was positively related to creativity…” (Eisenberger and Aselage, 2009). We haven’t stopped creating; we have just become too quiet about it. We’ve mistaken silence for efficiency and automation for insight. The storm is the gift. The wind is sudden and unpredictable, and the conditions may offer neither visibility nor comfort. Pressure is the altitude, and flight is impossible without pressure. That’s just…physics.
This is not an anti-AI article. Relearning how to fly doesn’t mean abandoning technology; it means reclaiming our humanity within it. It means choosing to feel something in the act of creating. To struggle. To doubt. To surprise ourselves. As Dr. Rajiv Nag so eloquently stated to Cohort 8 during his Qualitative Inquiry Methods course, “Sure, use AI, it is here in the room with us right now, but trust yourselves, and don’t be so quick to relinquish your power over to it. Yes, [this] (learning, completing a doctorate) is hard, and it will be challenging, but you can do it. As so many others have. Trust that you can do it too!”
So next time you reach for ChatGPT or Copilot to serve as a launchpad for ideas, consider starting it yourself instead. It might not be as fast or polished, but it might be yours. It might be the first step toward remembering who you are and what you’re capable of. Flight is not a gift; it’s a memory, and memories can be rekindled. All it takes is one wild, wonderful, imperfect thought.
Let Us Not Forget
AI is not the villain. It’s a powerful and beautiful tool. I remind you that it’s a two-way mirror reflecting how magical and powerful the human mind can be, while simultaneously reflecting how much of that power we’re willing to surrender in exchange for speed and ease.
This is not a call to abandon AI. Use it! However, it is a reminder of your ability to fly and a call to balance. To pause. To question. To create with purpose and to struggle with intention. To wonder. To dream.
And so, I leave you with the same challenge Dr. Nag gave me: “Put AI down. Just for a moment. Pick up a pen, stare at the blank page, and dare to fly.” N

References
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6.
Gupta, M. (2015). Impact of work pressure on creativity and innovation.
Eisenberger, R., & Aselage, J. (2009). Incremental effects of reward on experienced performance pressure: Positive outcomes for intrinsic interest and creativity. Journal of Organizational Behavior: The International Journal of Industrial, Occupational and Organizational Psychology and Behavior, 30(1), 95-117.
Spielberg, S. (Director). (1991). Hook [Film]. TriStar Pictures.
OpenAI. (2023). ChatGPT [Large language model]. https://chat.openai.com

Lamar Oglesby is the Executive Director of Research Financial Services at Rutgers University. Lamar serves on the NCURA Board of Directors, is a contributing editor for NCURA Magazine, and is a member of the Region II Steering Committee, serving on the leadership development and nominating committee, as well as the regional program committee. Lamar is currently pursuing a doctorate in Business Administration at Drexel University. He can be reached at loglesby@rci.rutgers.edu.


How USC Harnessed AI to Transform Sponsored Contracts Data into Strategic Insight
By Lumi Bakos, Emily Devereux, and Alex Teghipco
In 2024, the University of South Carolina (USC) launched a strategic AI-driven project to unlock powerful insights hidden within its sponsored contracts and broader research data ecosystem. The initiative, spearheaded by a project team comprising data scientists and research administrators, marks a transformative step toward addressing the growing need for data-informed decision making to identify research strengths, gaps, and future investments. Although USC had long maintained extensive data repositories across multiple offices, the data (sponsored awards and contracts, publications, patents, faculty profiles, and research cores) existed in silos, varied in format and quality, and lacked standardization.
Data cleaning was a critical step in the project. The project team audited 15 years of sponsored contracts, grant proposals, and award data from a legacy, home-grown routing system. While the volume of data was a clear strength, it also made cleaning the most complex and essential part of the process. Challenges included inconsistent submission formats over time (e.g., 21 different file types and many submissions missing file extensions); changes in faculty, departments, and colleges; and inconsistent naming conventions (e.g., industry sponsors and federal agencies, such as the Department of Defense, with multiple variations).
Several limitations were acknowledged. Contract proposals and bids, as well as awards, were attributed only to principal investigators (PIs), including those leading subcontracts, and not to co-investigators. While all submissions were retained in the dataset, the analysis focused on tenure-track, tenured, and research faculty. Resubmissions were identified using project titles, which sometimes led to misclassification due to changes in titles.
Once the data were cleaned, the team built a relational database that linked key data points, including faculty profiles, contract bids, proposal submissions, awards, and publication records. A major technical hurdle was entity resolution: accurately connecting records across systems without shared identifiers. The team employed a multi-stage, human-in-the-loop process that combined initial algorithmic matching with Large Language Model (LLM)-based refinement and final validation by human experts to reconcile variations in author names, project titles, and departmental structures.
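To make the first, algorithmic stage of such a process concrete, here is a minimal sketch (hypothetical thresholds and helper names, not USC’s actual code) that scores two sponsor or author names with fuzzy string matching and escalates only the borderline pairs to the later LLM and human review stages:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and strip punctuation so trivial variations don't block a match."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def match_stage(name_a: str, name_b: str,
                auto_accept: float = 0.9, auto_reject: float = 0.5) -> str:
    """First-pass matcher: accept clear matches, reject clear non-matches,
    and route everything in between to LLM-assisted or human review."""
    score = SequenceMatcher(None, normalize(name_a), normalize(name_b)).ratio()
    if score >= auto_accept:
        return "accept"
    if score <= auto_reject:
        return "reject"
    return "review"

print(match_stage("Dept. of Defense", "Department of Defense"))  # -> "review"
```

Because only the uncertain middle band reaches the expensive LLM and human stages, a human-in-the-loop pipeline of this shape stays tractable across tens of thousands of records.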
The integrated dataset enabled advanced analysis. Machine learning models and visualization tools were used to identify trends in research productivity, funding patterns, and collaboration networks. Topic modeling of contract bids and proposals submitted, awards received, and publications highlighted underrepresented areas with growth potential.
The platform also revealed structural inefficiencies. Some departments experienced a gradual decline in funding and proposal submissions, prompting the need for thoughtful discussions with departmental and college leadership to better understand and address the factors contributing to the decrease in research activity. These meetings were not limited to declining departments but also included productive departments and faculty to discuss the next steps in growing their research output. These insights informed recommendations for targeted investment, particularly in interdisciplinary areas such as energy and brain health. As a result, the university has applied for and received state and federal grants and contracts to grow these two areas.
The project also supported the creation of the Carolina Grants and Innovation (CGI) Hub, USC’s first institutional initiative focused on enhancing research development and training. A key deliverable was an interactive dashboard, now available to university leadership, deans, and associate deans for research. Though still in early development, the dashboard allows users to explore research trends, track progress, and align decisions with institutional goals.
By consolidating fragmented data, applying AI-driven analytics, and prioritizing data quality, USC has laid the groundwork for a more strategic and informed approach to research planning and investment.
A robust and scalable data pipeline was developed to extract standardized information from tens of thousands of contracts, research proposals, and awards. The data import phase required integrating nearly a dozen software packages to handle 21 different file formats. An iterative parsing approach was designed to recover data from files lacking extensions and to salvage information from documents corrupted by legacy system conversions.
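For files that arrive without extensions, one standard recovery technique (shown here as a simplified sketch; the signature table is illustrative, not the team’s actual parser) is to sniff the file’s leading “magic bytes” and infer a format before choosing a parser:

```python
# A small sample of leading "magic byte" signatures; a real pipeline
# would check many more formats before falling back to iterative parsing.
SIGNATURES = {
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip-based (docx/xlsx/zip)",
    b"\xd0\xcf\x11\xe0": "legacy Office (doc/xls)",
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
}

def sniff_format(data: bytes) -> str:
    """Guess a file's format from its first bytes when the extension is missing."""
    for magic, fmt in SIGNATURES.items():
        if data.startswith(magic):
            return fmt
    return "unknown"

print(sniff_format(b"%PDF-1.7 ..."))  # -> "pdf"
```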
Given the substantial volume of image-based data, a dual Optical Character Recognition (OCR) strategy was employed. Google’s Tesseract was used for clear documents, while Meta’s Llama 3 handled complex and poorly formatted cases requiring higher precision.
Data extraction and anonymization utilized a suite of fine-tuned LLMs. Microsoft’s Presidio, combined with a Bidirectional Encoder Representations from Transformers (BERT) model specialized in detecting Personally Identifiable Information (PII), ensured data privacy. For large-scale document processing, Gemini Flash 2.0’s extended context window was essential for accurate data capture.
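As a toy illustration of what a PII-masking pass does (regex stand-ins for the recognizers; the actual stack described above used Presidio and a fine-tuned BERT model), detected spans can be replaced with labeled placeholders before documents reach downstream models:

```python
import re

# Hypothetical minimal patterns; production recognizers cover many more
# entity types (names, addresses, IDs) and use ML models, not just regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Contact the PI at pi@example.edu or 803-555-1234."))
# -> "Contact the PI at <EMAIL> or <PHONE>."
```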
A multi-stage entity resolution process addressed inconsistencies, including organizational name changes, variations in sponsors, and author aliases, combining a string-matching algorithm with LLM-assisted disambiguation and expert human review.
With the integrated dataset, machine learning models, and visualization tools, researchers analyzed research productivity, funding trends, and collaboration networks. Insights informed the creation of the CGI Hub and guided strategic investment in emerging interdisciplinary research areas.
The AI-driven Research Intelligence Project established a unified research data infrastructure, enabling comprehensive, institution-wide analysis of research activity. By integrating 15 years of fragmented data, the university can now assess research productivity, funding trends, and collaboration networks with improved accuracy and consistency. The analysis identified emerging strengths in areas such as AI in healthcare and renewable energy while also highlighting underrepresented research domains that warrant future investment.
Additionally, inefficiencies—such as departments with declining research outputs—were revealed, informing targeted support strategies. A strong correlation between proposal submissions and awarded grants and contracts underscored the importance of fostering proposal activity. These insights led to the creation of the Hub, a centralized initiative focused on enhancing research development, supporting faculty through training, and promoting interdisciplinary collaboration.
A primary deliverable was an interactive dashboard for university leadership, deans, and associate deans for research. This tool enables the dynamic exploration of institutional research trends, supporting data-driven strategic planning, informed investment decisions, and effective progress tracking.
Long-term impacts include improved resource allocation, enhanced collaboration, and increased competitiveness in external funding. The project lays the foundation for sustained, data-informed decision-making, positioning the university to respond proactively to emerging research opportunities and challenges.
The AI-driven Research Intelligence Project represents a significant advancement in how USC manages, analyzes, and leverages its research data. By transforming fragmented and inconsistent datasets into a centralized, integrated platform, the university has laid the groundwork for sustained, data-informed research planning. This innovation enables leadership to move beyond anecdotal evidence and rely on comprehensive, longitudinal insights to guide strategic decisions.
Currently, the platform is being used to support institutional planning, inform investment in high-potential research areas, and guide the activities of the newly established CGI Hub. The interactive dashboard provides real-time access to research trends, faculty activity, and funding trajectories, helping deans, associate deans for research, and the Office of the Vice President for Research leadership align resources with emerging opportunities and institutional priorities.
The importance of this innovation lies in its ability to make complex research ecosystems more transparent and actionable. It empowers decision makers with tools to evaluate performance, identify gaps, and forecast areas for growth. As the platform continues to evolve, it is expected to play a central role in USC’s efforts to strengthen research capacity, enhance interdisciplinary collaboration, and increase competitiveness in securing external funding. N



Lumi Bakos, PhD, is the Associate Vice President for Research Operations, at the University of South Carolina. Dr. Bakos oversees strategic research administration, infrastructure optimization, and faculty development to advance institutional research capacity, external funding success, and cross-disciplinary collaboration. She can be reached at bakos@email.sc.edu.
Emily Devereux, DPA, CRA, is Associate Vice President for Research Development at the University of South Carolina, overseeing the Carolina Grants & Innovation Hub. A past NCURA Board member and Region III Chair, she advances research development and training for institutional research growth. She can be reached at devereue@mailbox.sc.edu.
Alex Teghipco, PhD, is a Data Scientist Consultant. Dr. Teghipco traded brain scans for scientometrics. Wielding machine learning and tool making expertise from his cognitive neuroscience background, he now crafts applications and systems that turn overwhelming datasets into clearer insights. His mission: making complex data sing for research administrators who need answers, not algorithms.

• Administering Research Contracts
• Compensation-Personal Services: Documenting and Supporting Salary Charges to Federal Award
• Crafting Fully Compliant Budgets
• Financial Management of Sponsored Program Awards
• How to Effectively Manage an Audit
• Improving Efficiencies: Assessing the Sponsored Research Operation
• International Research Collaborations
• A Primer on Business Continuity
• A Primer on Export Controls
• Understanding & Managing Sponsored Program Administration at Predominantly Undergraduate Institutions


Denise Wallen, Research Officer & Research Assistant Professor, University of New Mexico is the 2025 recipient of the Robert C. Andresen Outstanding Achievement in Research Administration Award. This award recognizes a current or past NCURA member who has made 1) noteworthy contributions to NCURA, and 2) significant contributions to the profession of Research Administration. First awarded in 1994, this award is NCURA’s highest honor.
During Denise’s 45 years of NCURA membership, she has made countless contributions to NCURA and the research administration profession. Denise has served as NCURA President, on the Board of Directors, Chair of the Nominating & Leadership Development Committee, Chair of the Professional Development Committee, a member of the Select Committee on Peer Review and a member of the Select Committee on Global Affairs. She has also served as a Co-Chair of the Annual Meeting, as a faculty member for the Level I: Fundamentals of Sponsored Projects Administration Workshop and as a member of many conference program committees. Denise continues to serve as an NCURA Peer Reviewer. Denise received NCURA’s Julia Jacobsen Distinguished Service Award in 2008.
Rosemary Madnick, Lundquist Institute for Biomedical Innovation, and Georgette Sakumoto, Emeritus, University of Hawaii, shared, “Denise has made extraordinary contributions to the field of research administration, demonstrating unparalleled dedication and exemplary service to the NCURA community. Denise embodies the spirit of the Robert C. Andresen Award, reflecting the high standards of volunteerism and leadership that Bob exemplified during his remarkable career. Her work resonates with the values of NCURA, fostering a culture of collaboration, excellence, and service.”
David Mayo, California Institute of Technology, says, “I have known Denise for over 23 years, and I cannot think of anyone who has devoted more effort to supporting and advancing NCURA and the field of research administration, nor anyone more deserving of this recognition, NCURA’s highest accolade. Through consistent mentoring, Denise has positively impacted the professional lives of many research administrators who have gone on to become leaders at both the regional and national level, as well as serving as productive leaders at their institutions.”
Kris Monahan, Harvard University, adds, “Denise has done so much for NCURA and for research administration with her ongoing mentoring and support. She has an amazing sense of how to ‘talent scout’ future leaders for NCURA and research administration and to support, mentor, and guide them. I couldn’t think of a better colleague, mentor, and friend to receive this award. Bob Andresen would undoubtedly be proud to see Denise receive this award. She displays all the characteristics of a lifelong research administrator: passion, leadership, and resilience, and she is an amazing human being who cares about NCURA and its members.”
Tony Ventimiglia, Auburn University, contributed that Denise Wallen has a long-standing commitment and history of supporting the profession of research administration. Her commitments incorporate sustaining global partnerships and collaboratively identifying solutions to the needs of research administrators internationally. Denise truly is deserving of this prestigious award, and I am very pleased to support this nomination.
Carolyn Hushman and Kristopher Goodrich, University of New Mexico, shared, “Denise has made exceptional contributions to research administration here at the University of New Mexico (UNM), especially within the College of Education and Human Sciences (COEHS). Denise embodies the spirit of the Robert C. Andresen Award, reflecting her dedication to integrity, efficiency, and innovation in research administration across her career. We strongly believe that her contributions have had a lasting impact on the field of research administration and on the individual people she has worked so hard to support.”
On receiving the award, Denise states, “This award is an honor and humbling experience. This recognition reinforces the importance of commitment to research administration, and our collective dedication to innovative solutions within our profession. I am grateful to have been able to be part of this journey. And I thank NCURA for this recognition.”
Denise Wallen will receive the Robert C. Andresen Outstanding Achievement in Research Administration Award on Monday, August 11, 2025, during the 67th Annual Meeting Award Ceremony.

This year the NCURA Nominating and Leadership Development Committee selected five veteran NCURA members to receive the Julia Jacobsen Distinguished Service Award. This award recognizes members who have made sustained and distinctive contributions to the organization. Each recipient has contributed to NCURA’s success in numerous ways and for many years. The following summaries provide a snapshot of their service and contributions in addition to the many presentations they have made at regional and national meetings and conferences over the years. The 2025 Award recipients are:




Scott Davis, Director, Research Administration Office, University of Oklahoma Health Sciences Center. In Scott’s 28 years of NCURA membership, he has served on the Board of Directors, as Chair of Region V, as a Contributing Editor for NCURA Magazine, and as a member of the FRA Conference Program Committee. Scott has served on Region V’s Mustang Mentoring Program Committee and as a mentor for the program. Scott received Region V’s Distinguished Service Award in 2019. As a recipient of this award, Scott states, “Volunteering with NCURA has been a cornerstone of my career, providing invaluable friendships and a sense of purpose. Over the past 29 years, I’ve had the privilege of contributing to research administration, helping to improve global health in a small but meaningful way.”

Laura Kingsley, Director, Office of Sponsored Programs, University of Pittsburgh. Since Laura became an NCURA member in 2012, she has served on the Board of Directors and on Region II’s Steering Committee. Laura has authored articles for NCURA Magazine and NCURA’s Sponsored Research Administration: A Guide to Effective Strategies and Recommended Practices publication. Laura currently serves as a faculty member for Level II: Sponsored Projects Administration - Critical Issues in Research Administration Workshop and as a member of the 2026 FRA Conference Program Committee. Laura shares, “What sets NCURA apart is the shared commitment to lifting each other up through service and volunteering. That’s why being recognized with the Julia Jacobsen Distinguished Service Award is so meaningful to me—though I often feel that what I’ve received from this community far outweighs what I’ve contributed.”

Katherine Kissmann, Associate Executive Director of Sponsored Research Services, Texas A&M University. During her 30 years of NCURA membership, Katherine has served on the Board of Directors, as Chair of Region V, as a member of the Nominating & Leadership Development Committee, and as a member of the Professional Development Committee. Katherine has also served on the Pre-Award Research Administration Conference Program Committee, Annual Meeting Program Committee, and as a Contributing Editor for NCURA Magazine. Katherine is a graduate of NCURA’s Executive Leadership Program. Katherine currently serves as a member of the Board of Directors and as Chair of Region V’s Mustang Mentoring Program. As a recipient of this award, Katherine adds, “I am genuinely honored to be recognized for my service to NCURA, which has deeply enriched both my professional career and personal development. Through NCURA, I have found meaningful connections and the lasting impact of contributing to a community dedicated to shared growth. I am especially grateful to those who guided and supported me along the way, opening doors and encouraging my involvement in the organization. It is a privilege to give back by supporting and mentoring the next generation of research administrators and emerging leaders, continuing the cycle of service and opportunity that has defined my NCURA experience.”
Tim Schailey, Assistant Vice President, Research Support Services, Office of Research Administration, Thomas Jefferson University. As a member of NCURA since 2008, Tim has served as NCURA’s Treasurer, Chair of the Financial Management Committee, and as Chair and Treasurer of Region II. Tim has also served on the Board of Directors and as a faculty member of the Level I: Fundamentals of Sponsored Projects Administration workshop. Tim is a graduate of NCURA’s Executive Leadership Program
and received Region II’s Distinguished Service Award. In reaction to the award, Tim says, “I’m truly honored to receive this recognition, and I’m especially grateful for the support and encouragement I’ve received along the way. I’ve had the privilege of learning from colleagues whose example has shown me what it truly means to serve. Being part of a community where we work together to make a difference is something I deeply value. This acknowledgment means a great deal to me.”
Roger Wareham, Director, Grants and Research, University of Wisconsin-Green Bay. In his 23 years as a member of NCURA, Roger has served as Chair and Treasurer for Region IV and as a member of the Professional Development Committee. Roger served as Co-Chair of the Pre-Award Research Administration Conference and as a member of the Annual Meeting and Pre-Award Research Administration Conference program committees. Roger co-authored the NCURA publication Understanding & Managing Sponsored Program Administration at Predominantly Undergraduate Institutions as well as multiple articles for NCURA Magazine. Roger is a graduate of NCURA’s Executive Leadership Program. Roger is currently the Vice Chair of the Select Committee on Peer Programs/Peer Review and a Peer Reviewer. In reaction to the award, Roger states, “NCURA has opened so many doors for me—both regionally and nationally. I have gained so much, both as a member and a volunteer. And, I have made so many lasting relationships along the way. This award caps years of wonderful experiences!”
The Distinguished Service Award recipients will be recognized during the upcoming 67th Annual Meeting Award Ceremony on Monday, August 11, 2025. Please join us in thanking them for their service and their contributions!

The NCURA Distinguished Educator designation recognizes exceptional contributions through the ideation, creation, and delivery of NCURA’s national and global research administration educational offerings. An individual holding the Distinguished Educator designation is one who has had a major impact on multiple educational levels in professional development within NCURA. This year the recipients are:


Anne Albinak has served as NCURA’s Treasurer, on the Board of Directors, as Chair of the Education Scholarship Fund Select Committee, as a member of the Nominating & Leadership Development Committee, and as a member of the Professional Development Committee. Anne has served as Co-Chair of the Financial Research Administration Conference and as a faculty member of the Financial Research Administration Workshop. Anne is a graduate of NCURA’s Executive Leadership Program and received Region II’s Distinguished Service Award as well as the NCURA Julia Jacobsen Distinguished Service Award.
Rashonda Harris has served on the Board of Directors, as Chair of the Collaborate Communities, as Copy Editor for NCURA Magazine, and as a member of the Professional Development Committee. Rashonda served on the Presidential Task Force on Diversity and Inclusion, as a faculty member for the Financial Research Administration workshop, and on the Annual Meeting and Financial Research Administration Conference program committees. Rashonda is a graduate of NCURA’s Executive Leadership Program and received Region III’s Distinguished Service Award as well as NCURA’s Julia Jacobsen Distinguished Service Award. Rashonda currently serves as a faculty member for the Departmental Research Administration Workshop.

Rosemary Madnick has served as NCURA’s President, as Chair of the Nominating & Leadership Development Committee, as Chair of the Select Committee on Peer Programs/Peer Review, and as a member of the Professional Development Committee. Rosie has served as Co-Chair of the Annual Meeting and the Pre-Award Research Administration Conference, as well as Co-Editor of NCURA Magazine. Rosie is a graduate of NCURA’s Executive Leadership Program and NCURA’s Leadership Development Institute. Rosie was the recipient of NCURA Region VI’s Helen Carrier Distinguished Service Award and NCURA’s Julia Jacobsen Distinguished Service Award. Rosie currently serves as a member of the Board of Directors, as a Peer Reviewer, and as a faculty member for the Level I Fundamentals of Sponsored Projects Administration workshop.
We’ve been dying to share this exciting update with you, and the wait is finally over!
You may already know that the NCURA Education Scholarship Fund has, for several years, supported members pursuing a master’s degree in research administration. Each scholarship is valued at $2,500, and we typically award two in the spring and two in the fall through a competitive application process.
But, here’s the big news…

Starting this fall, we’re expanding the scholarship program to be more inclusive—and that means all who are pursuing higher education in research administration through a certificate, certification, or undergraduate studies will be eligible to apply!

Yes, you read that right! Whether you’re just starting your academic journey or building on your current qualifications with a certificate, NCURA wants to support you. The Scholarship Committee is currently hard at work reviewing eligibility criteria, designing an updated application process, and ensuring a fair and transparent selection system for all applicants.

So, what’s next?
✔ Check our website in August for the updated details and application guidelines.
✔ Join us at the 67th Annual Meeting in August to hear more.
✔ Have questions? Reach out to ESFSC staff liaison Audrey Nwosu at nwosu@ncura.edu.
Thinking of applying? Great! Here’s what you’ll need to prepare:
• Proof of enrollment in an eligible educational program
• A current resume (no more than two pages)
• A short personal statement explaining why you’re pursuing your education

We also want to hear how you’ve engaged with NCURA in the past and how you plan to contribute to our community in the future. Your story is a big part of what makes your application shine!
Visit the NCURA scholarship webpage at: www.ncura.edu/Education/EducationScholarshipFund/ApplyforEducationScholarshipFund.aspx
We’re incredibly excited about this next step for the scholarship program—and even more excited to help more of you achieve your educational goals. Don’t miss this opportunity!
As a reflection of NCURA’s commitment to Diversity, Equity, and Inclusion, we strive to achieve diversity in all aspects of appointments, including experience, geography, institution type, gender, and ethnicity. We encourage ALL interested members to get involved in NCURA.

Charles Shannon is Head of Research and Innovation Operations at Loughborough University, UK. With extensive experience in research funding, compliance, and strategy, he supports institutional research sustainability, governance, and international collaboration across academia and funding bodies. He can be reached at C.Shannon@lboro.ac.uk.
By Michelle D. Christy
In a period marked by dramatic shifts in federal research policy, institutions are under increasing pressure to adapt quickly, strategically, and transparently. This article (based on my May 2025 Federal Demonstration Partnership (FDP) presentation) provides an overview of the recent major regulatory developments impacting research at universities and nonprofit institutions and offers a pragmatic framework for managing regulatory turbulence, helping institutions remain compliant and effective.
At the heart of institutional resilience is strong internal coordination. Many institutions have formed steering groups made up of stakeholders from sponsored programs, sponsored accounting, legal counsel, deans, compliance, and sometimes even the board of trustees. These cross-functional teams are crucial for assessing policy impacts and crafting timely, coordinated responses. For example, the team could lead or support internal communication and responses to institution-wide issues such as addressing diversity, equity, and inclusion concerns; responding to indirect caps in proposals and current projects; and other policy directives and requirements issued by the federal government.
Research agencies and other government offices communicate with institutions through various channels, including PIs, general counsel offices, VPRs, and sponsored programs. Institutions should consider designating a single office to handle all project-related communications. Also consider centralizing responses to agencies for project-specific directives from federal agencies so that sponsors receive accurate, vetted information, even if that means slowing responses to prioritize accuracy.
A significant development across federal agencies is the increased demand for justification of Line of Credit (LOC) draws, as mandated by Executive Order 14222, Implementing the President’s “Department of Government Efficiency” Cost Efficiency Initiative, which aims to improve transparency in the use of federal funds. Institutions are now required to provide detailed, itemized explanations for expenditures, including the project’s purpose, cost breakdowns by line item, and an explanation of how each expense benefited the project. Agencies are applying the EO differently.
This is an evolving issue, and it is unclear where the new practices will ultimately settle. In the meantime, institutions should consider the following to help reduce delays in drawdowns:
• Separate draws by federal agency, even when draws are submitted to the same payment portal.
• Separate draws for discretionary and non-discretionary projects (when known)—most assistance awards are discretionary; this is a new area for agencies, and sponsors are still working out these kinks.
• Separate draws for terminated projects, even if an appeal has been filed.
Furthermore, when communicating with federal agency representatives, it may be helpful to remind them that agencies direct, review, or approve project-specific budgets and expenses at several points during the awarding process and the period of performance, specifically:
• In the NOFO, which stipulates allowable and unallowable expenses.
• When reviewing the proposed budget at the time of project review.
• When the agency issues the award, which includes a detailed line-item budget.
• In quarterly/annual financial reports.
• In the annual progress report.
Assistance award recipients are also able to rebudget costs under the Uniform Guidance and agency policies as long as there is no change in scope.
Additionally, there are several institution-level checks in place to ensure compliance. Institutions implement internal controls in financial processes to ensure costs are properly charged and reviewed regularly. Recipients sign statements for each drawdown, certifying that the expenses are allowable, allocable, and reasonable. Institutions undergo annual audits for compliance with the Uniform Guidance, which are posted in the federal clearinghouse (census.gov). And when the agency’s Inspector General audits institutions, reports are available to the public.
One of the most consequential policy developments is the 15% cap on indirect cost (IDC) reimbursement introduced by multiple agencies (NIH, NSF, DOE, DOD). As of the date of this article, the lawfulness of the cap is
under judicial review. Still, institutions must decide how to handle proposal submissions and new terms that reference future implementation of the cap. Most institutions report budgeting the full negotiated rate in the proposal, along with a narrative explaining the institution’s position should the cap apply (e.g., negotiating with the sponsor or withdrawing the proposal).
Several high-level discussions are in progress to explore new IDC models, e.g., the Joint Associations Group (JAG), suggesting that institutions may need to rethink their financial strategies for sustaining research operations in the face of reduced indirect recoveries.
NIH Pause on Foreign Subawards
NIH’s recent announcement (NOT-OD-25-104) indicates a pause on foreign subawards—whether new or ongoing—until new collaboration policies are finalized. Institutions are not allowed to submit grant applications that include foreign collaborations. NIH states this delay is temporary. The restrictions apply even to previously approved foreign subawards and potential foreign components.
All foreign collaborations on NIH awards must be re-approved either at the time of renewal or if NIH contacts the recipient institution. Institutions should consider replacing existing foreign collaborators with a US institution or converting the foreign collaboration into a vendor relationship (with corresponding changes to the work scope). However, there is some unofficial news that NIH could be more flexible when it comes to clinical trials being carried out abroad.
NIH states that it will be ready to accept applications with non-US subawards in the fall. It appears that NIH plans to implement a model like the NSF collaborative proposal, whereby NIH will issue direct awards to non-US entities. NIH is expected to provide additional guidance.
Terminations and Stop Work Orders: Rights, Risks, and Appeals
Termination of federal assistance awards—once a rare event—has become a more prominent policy tool. Under 2 CFR 200.340, awards can be ended not only for noncompliance but also if they no longer support agency priorities. Agencies must give written notice and provide recipients with the opportunity to object or appeal. Institutions might consider the following when deciding whether to appeal:
• Does the termination/suspension state why the specific project no longer meets agency priorities?
• Is it possible to make the case that the science remains vital to the country?
• To what extent does the project focus on topics identified in the executive orders? Can the project be revised to exclude certain activities?
• What is the institution’s appetite for risk?
• How much time remains until project completion?
If the institution chooses to appeal, consider including the following information in the letter to the sponsoring agency:
• Acknowledge receipt of the agency’s project termination letter.
• Discuss the scientific aims of the agency program and how the project responds to it.
• Describe how the science is vital to the country and the harm that would occur if the project were terminated (e.g., to study participants, the public).
• Quote the language in the termination letter.
• Build the argument for the appeal, including:
  - The legal framework: the agency’s mission and policy, and the Uniform Guidance language for terminations.
  - Contest the agency’s basis for termination: the agency did not justify the termination, or the justification does not match the project; the project could be amended; the termination violates statutes; the termination relies on regulations that do not apply to the project; the termination would violate one or more court orders; or several other legal reasons.
The AOR or another authorized institutional signatory usually signs the appeal.
For federal contracts, refer to the Federal Acquisition Regulation (FAR) 52.242-15 for stop work orders, and the FAR 52.249 series for termination for convenience clauses. In either case, the government must provide a clear justification for its actions.
Stop-work orders function as temporary pauses. Institutions should consider using the stop-work period to build the justification for why the project should be restarted.
There is no appeals process for a contract termination, but institutions can request reimbursement for unpaid expenses under the Contract Disputes Act (see FAR 52.249-2).
With terminations and suspensions come financial complexities about the appropriate wind-down costs. Institutions must carefully consider the allowability of expenses incurred during or after termination. The Uniform Guidance (2 CFR 200.472) provides a list of potential allowable costs, including labor, legal fees, idle time, and costs that the institution would not have to pay but for the termination (e.g., severance, cancellation costs of leases and hotel contracts).
• Institutions are urged to develop guidance on terminations and apply that guidance consistently to help support the allowability of the costs.
• Where projects include subawards, request documentation and justification of wind-down costs.
The considerations discussed here highlight the institutional teamwork, financial stewardship, policy awareness, and effective communication that are critical for acting with clarity, unity, and foresight. The changing federal landscape requires greater institutional agility, legal understanding, and operational resilience. By investing in internal collaboration, careful financial planning, and informed risk-taking, institutions can continue to fulfill their research missions, even amid shifting federal mandates. N
Resources
Terminations
Ropes & Gray-Grant Termination Appeals: Process and Strategies
Inside HigherEd Navigating the NIH Grant Cancellation: Options for Researchers
COGR Costing Points to Consider for Terminations and Suspensions (COGR portal access required)
AAUP Center for the Defense of Academic Freedom, Understanding the Law and Policies for Grant Terminations for the NSF
Defend the Spend
COGR Points to Consider for Reimbursement of Expenses Under Active Grants (COGR portal access required)

Michelle D. Christy, Principal, Linden Insights, LLC, has more than twenty-five years in research management and compliance. Michelle has developed innovative solutions to address research management problems related to risk management, research compliance, contract negotiations, and operational excellence at top research institutions including MIT and Princeton University. She is a trusted partner and advisor to organizational leaders nationally. She can be reached at michelle@lindeninsights.com.
Seamlessly manage your full grant and research administration lifecycle – so your team can get back to making a difference, faster.
Streamlined program management through intuitive dashboards and configurable workflows
Organization-wide transparency for informed project oversight and alignment

Stay audit-ready with stress-free compliance and precise reporting, every step of the way
Built and backed by experts with hands-on industry experience
The most trusted name in research and grant administration software
Full grant and research lifecycle from proposal to closeout
Over 700 clients in higher education, healthcare, and more

By Claudia Scholz

Research Administrators at predominantly undergraduate institutions (PUIs) face unique challenges due to their small office size, management of a broad array of roles and funders, and sometimes inadequate administrative infrastructures. PUI faculty have high teaching loads and face barriers to research, which means they may lack experience in proposal writing and award management; this puts more responsibility on Research Administration (RA) staff. RA professionals at PUIs have built powerful networks of peer support—including Colleges of Liberal Arts Sponsored Programs (CLASP) and the PUI affinity groups in the National Council of University Research Administrators (NCURA), the Society of Research Administrators International (SRAI), and the National Organization of Research Development Professionals (NORDP)—but still sometimes wish for an extra set of hands to manage these challenges. It’s hard to hear about the emerging Artificial Intelligence (AI) tools and not wonder if help might finally be at hand.

Generative AI as we know it today (Padmaja et al., 2024; Wiggins & Jones, 2023) came to the forefront of public attention relatively recently with the widespread adoption of OpenAI’s ChatGPT. This text generator soon faced competition from Anthropic’s Claude, Meta’s Llama, Microsoft’s Copilot, Google’s Gemini, Perplexity.ai, and international competitors like DeepSeek. Vendors serving the RA market have quickly jumped on the AI bandwagon, pushing out new tools that purport to use AI to find the right opportunities (atomgrants.com [2025]), to assure regulatory compliance (scytale.ai [2025]), and even to write proposals (grantable.co [2025]). Colleagues at R1 institutions are leveraging their research strengths to implement bespoke AI tools for RA. Emory University developed ORAgpt, a chatbot trained on standard operating procedures that can answer user questions about RA tasks (Wilson et al., 2025). The National Science Foundation awarded $4.5 million to the University of Idaho and Southern Utah University (Award 2427549) to develop “trustworthy AI-powered tools to automate manual processes, reduce errors, and augment the capabilities of research administrators” (Reagan, 2025). Meanwhile, institutions, funders, and regulatory agencies are mobilizing to put rules in place about which AI applications and tools are prohibited, permissible, or recommended.

There are several reasons to be skeptical about these new tools.
• AI is not as good as it’s made out to be. Text generation tools use statistical prediction to complete sentences. These “stochastic parrots” (Bender et al., 2021) might give you made-up citations (Claburn, 2025), “dangerous misinformation” (Hall, 2025), stereotypes (Williams, 2025), overgeneralizations (Peters & Chin-Yee, 2025), or creepy personal comments (Wiggers, 2025).
• AI has invisible costs. Generative models are computationally intensive; this rapidly turns older chips into e-waste. Data centers are outpacing household energy consumption in some markets, and then there’s the water use and noise complaints from neighboring communities (O’Donnell & Crownhart, 2025).
• LLMs fail to protect privacy or intellectual property. Most text generation tools add user prompts and questions to their training data. Since large language models (LLMs) can be induced to reveal fragments of their training data (Su et al., 2024), your inputs may become part of a future user’s outputs. Moreover, AI-generated text cannot be copyrighted, limiting the property rights you or your faculty colleagues can assert over the generated content.

These problems notwithstanding, you may want to explore these tools, whether out of curiosity or to keep your professional skills current. Here are a few things to keep in mind:

Use AI Responsibly
• Choose the right AI tool for the job. While ChatGPT is the most widely known of the generative AI tools, it might not be the right tool for your specific task. Even within ChatGPT, you can select GPTs trained to generate specific kinds of outputs. Take some time to familiarize yourself with these options before starting with a generic AI.
• Become proficient at prompt engineering. To avoid going back and forth with the GPT to generate the text you need, start by giving it adequate information in a style it can easily understand. A successful prompt usually contains an instruction, some context, and the desired output format (e.g., bullet points in English; a paragraph in the style of Ernest Hemingway). You might want to give the GPT some examples of what you want, or some warnings about what to avoid. Affirmative statements are easier for the GPT to parse than negative statements (Bens, 2024; Kambey, 2025).
• Use retrieval-augmented GPTs. Avoid the hallucination problem by making the GPT cite its sources. GPTs like SciSpace and Jenni.ai will retrieve and incorporate only academic work. You can even have them write the response based only on your preferred sources. These tools are limited in what the GPT has access to, so paywalled works won’t be included. (You can upload documents to these systems, but make sure to review the terms and conditions to understand what the company intends to do with your uploaded documents.)
• Protect your IP and privacy. Avoid entering personal information, intellectual property (yours or someone else’s), preliminary data, or anything else that might be sensitive or proprietary.
• Never let a GPT write a final draft. A GPT is a tool, not a replacement for human insight, contextual knowledge, or style. Always revise the output. Never cut and paste.
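The prompt anatomy described above (an instruction, some context, a desired output format, plus optional examples and warnings) can be sketched as a simple template builder. This is an illustrative sketch only; the function name and structure are our own, not part of any particular tool:

```python
def build_prompt(instruction, context, output_format, examples=None, warnings=None):
    """Assemble a structured prompt from the pieces a good prompt usually contains."""
    parts = [
        f"Instruction: {instruction}",
        f"Context: {context}",
        f"Output format: {output_format}",
    ]
    if examples:
        parts.append("Examples:\n" + "\n".join(f"- {e}" for e in examples))
    if warnings:
        # Phrase warnings affirmatively where possible; models parse
        # affirmative statements more reliably than negations.
        parts.append("Keep in mind:\n" + "\n".join(f"- {w}" for w in warnings))
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Summarize this award notice for a new PI.",
    context="The PI has never managed a federal grant before.",
    output_format="Bullet points in plain English",
    warnings=["Use only information from the notice itself."],
)
print(prompt)
```

Writing the pieces down separately like this makes it easier to reuse the same instruction while swapping in different contexts or output formats, rather than retyping the whole prompt each time.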
“Try It” to Become Comfortable with Generative Tools
As with any new software application, it is a good idea to learn and try a tool before using it for something official. (Remember the first time you used PowerPoint?) Here are some example tasks to help you test these AI tools for yourself. All the referenced tools are cost-free or have a free tier available.
Example 1
Level 1: Large Language Models
Your new coworker: Goblin Tools
PUI challenge: You are struggling with how to respond to a faculty member’s request, which you find unreasonable. You’re an office of one and don’t have anyone to discuss this with.
Try: https://goblin.tools
Write your email response to the faculty member and ask Goblin Tools to assess its tone. Goblin Tools is designed especially to help neurodivergent individuals with interpretive tasks as well as organization and productivity skills.
Example 2
Level 10x: Retrieval Augmented AI
Your new coworker: NotebookLM
PUI challenge: You are onboarding a new staff member and are inundated with questions about award management.
Upload your policies and SOPs to NotebookLM. Add any public documents that might be helpful, such as the Uniform Guidance. (Google promises not to incorporate your uploads into its model training, though be careful not to upload any sensitive or proprietary documents.) Enter your trainee’s questions into NotebookLM, then review the results. If you are satisfied that the GPT is giving good enough answers, then ask your trainee to ask the GPT first. This tool will generate study guides, timelines, and even uncanny “podcasts” of two voices discussing the uploaded texts.
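Under the hood, retrieval-augmented tools like this do two things: find the passage in your documents most relevant to the question, then instruct the model to answer only from that passage. A minimal sketch of the idea, using naive keyword overlap as a stand-in for the vector search real systems use (the function names and sample SOP text are hypothetical):

```python
import re

def _words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents):
    """Return the passage sharing the most words with the question
    (a crude stand-in for the semantic search real RAG tools use)."""
    q = _words(question)
    return max(documents, key=lambda d: len(q & _words(d)))

def grounded_prompt(question, documents):
    """Build a prompt that pins the model's answer to the retrieved passage."""
    passage = retrieve(question, documents)
    return (f"Answer using only this source:\n{passage}\n\n"
            f"Question: {question}\n"
            "If the source does not answer the question, say so.")

# Hypothetical SOP snippets standing in for your office's uploaded documents.
sops = [
    "No-cost extensions must be requested 30 days before the project end date.",
    "Equipment purchases over $5,000 require prior sponsor approval.",
]
print(grounded_prompt("How do I request a no-cost extension?", sops))
```

The closing instruction ("If the source does not answer the question, say so") matters: it gives the model an explicit alternative to guessing, which is what makes grounded answers more trustworthy than free-form generation.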
AI tools are exciting and useful but should be approached with caution. They have the potential to offer small RA offices an extra set of ‘hands.’ If used sparingly and strategically, GPTs could become a junior coworker to help you offer your researchers faster, more comprehensive, and customized services. N
References
Atomgrants [Computer software]. (2025). Retrieved from https://atomgrants.com/
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Bens, A. (2024, February 20). Prompts, prompt engineering and tips for prompt writing [Video]. NCURA YouTube Channel. https://www.youtube.com/watch?v=u6abUvB9Xd8
Claburn, T. (2025, February 25). LLM aka large legal mess: Judge wants lawyer fined $15K for using AI slop in filing. The Register. https://www.theregister.com/2025/02/25/fine_sought_ai_filing_mistakes/
Grantable [Computer software]. (2025). Retrieved from https://www.grantable.co
Hall, R. (2025, May 4). ‘Dangerous nonsense’: AI-authored books about ADHD for sale on Amazon. The Guardian. https://www.theguardian.com/technology/2025/may/04/dangerous-nonsense-ai-authored-books-about-adhd-for-sale-on-amazon
Kambey, F. J. (2025, May 14). The anatomy of a perfect AI prompt: Goal, return format, warnings, and context dump. Medium. https://medium.com/@feldy7k/the-anatomy-of-a-perfect-ai-prompt-goal-return-format-warnings-and-context-dump-893354da0205
O’Donnell, J., & Crownhart, C. (2025, May 20). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Padmaja, C. V. R., Narayana, S. L., Anga, G. L., & Bhansali, P. K. (2024). The rise of artificial intelligence: A concise review. IAES International Journal of Artificial Intelligence, 13(2). https://doi.org/10.11591/ijai.v13.i2.pp2226-2235
Peters, U., & Chin-Yee, B. (2025). Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 12, 241776. https://doi.org/10.1098/rsos.241776
Reagan, M. (2025, March 20). U of I awarded $4.5 million grant to pioneer generative AI tools for research administration. AI4RA, University of Idaho. https://ai4ra.uidaho.edu/u-of-i-awarded-4-5-million-grant/
Scytale [Computer software]. (2025). Retrieved from https://scytale.ai
Su, E., Vellore, A., Chang, A., Mura, R., Nelson, B., Kassianik, P., & Karbasi, A. (2024). Extracting memorized training data via decomposition. arXiv preprint arXiv:2409.12367. https://doi.org/10.48550/arXiv.2409.12367
Wiggers, K. (2025, April 29). OpenAI explains why ChatGPT became too sycophantic. TechCrunch. https://techcrunch.com/2025/04/29/openai-explains-why-chatgpt-became-too-sycophantic/
Wiggins, C., & Jones, M. L. (2023). How data happened: A history from the age of reason to the age of algorithms. W. W. Norton & Company.
Williams, R. (2025, April 30). This data set helps researchers spot harmful stereotypes in LLMs. MIT Technology Review. https://www.technologyreview.com/2025/04/30/1115946/this-data-set-helps-researchers-spot-harmful-stereotypes-in-llms/
Wilson, L. A., Konsynski, B., & Yisrael, T. (2025). Advancing research administration with AI: A case study from Emory University. SRAI Journal of Research Administration Blog, LVI(2). https://www.srainternational.org/blogs/srai-jra2/2025/05/22/advancing-research-administration-with-ai-emory
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Hachette Book Group.

Claudia Scholz, (previously of PUIs Spelman College and Trinity University in Texas), is now the Director of Research Development in the University of Virginia School of Data Science where she is part of a team of four. Claudia wrote this article with AI assistants Perplexity, NotebookLM and SciSpace. Claudia can be reached at cws3v@virginia.edu.
By Barrie D. Robison, Nathan Layman, Jason Cahoon, Luke Sheneman, Sylvia Bradshaw, Katie Gomez, Nathan Wiggins, and Sarah S. Martonick

The confluence of increasing regulatory and reporting demands, capped administrative budgets, and immediate productivity gains offered by Artificial Intelligence (AI) drives the diffusion of AI into the Research Administration (RA) workforce. AI tools can empower RA enterprises to meet these ever-increasing demands, but not without a catch.
There are significant differences between an individual’s use of AI tools, such as ChatGPT, and systematic institutional implementations of AI systems. Dell’Acqua et al. (2023) introduced the concept of a “jagged technological frontier” to describe the potential of AI tools to benefit human knowledge and enterprise. Tasks such as document summarization are “within the frontier” if AI can improve them; “outside the frontier” if current AI tools are not useful; and “at the frontier” if the utility of AI is unknown or still emerging. In characterizing the frontier as “jagged,” they suggest that the utility of AI capabilities remains unresolved for many tasks. Furthermore, the frontier isn’t static; the release of each new model shifts the boundary. These dynamics are particularly relevant for RA workflows, which demand rigorous evaluation and strategic implementation to integrate AI at scale.
In this article, we explore the implications of navigating this frontier within RA, highlighting the opportunities, challenges, and strategic considerations necessary for successful, systematic adoption of AI technologies. We argue that the future of AI in RA lies not in isolated personal productivity gains, but rather in scalable, reproducible, and secure workflows.
Common AI solutions like ChatGPT, Claude, or Gemini, which offer individuals the opportunity to augment administrative tasks, are compelling tools for research administrators. Despite their capabilities, general-purpose AI tools fall short because off-the-shelf chatbots aren’t built for the specialized, context-specific needs of institutional research administration. These models operate independently from core RA infrastructure, inhibiting a deeper understanding of specific sponsor guidelines, institutional policies, and complex compliance requirements. Administrators also encounter inconsistency and uncertainty in commercial AI outputs, which can vary based on model choice and prompt construction. Uncertainty in model performance compromises key administrative tasks, many of which require consistent outcomes to ensure standardization, reproducibility, and reliability. Additionally, RA workflows routinely handle sensitive, confidential, and proprietary information. Submitting such data to external AI platforms risks violating internal policies, sponsor requirements, and data privacy regulations. Individual subscriptions can be economical for personal use, but scaling this approach to an entire RA enterprise can quickly become expensive, unpredictable, and ungovernable. In RA workflows, tasks that require institutional context, consistency, security, and efficiency map unpredictably across the “jagged” technological frontier. These limitations underscore the need for institutional strategies to augment RA workflows with AI.
Transitioning from individual AI experimentation to robust institutional deployment requires a strategic framework that addresses the limitations of personal-use tools. Successful, scalable AI integration in RA hinges on satisfying four essential pillars: Accuracy, Flexibility, Reproducibility, and Security. These pillars ensure that AI deployments are reliable, adaptable, and compliant.
• ACCURACY: Ensuring Precise Data Extraction and Transformation. In RA, precision is paramount. Institutional AI systems must be rigorously validated for accuracy, particularly in tasks involving data extraction from complex documents. These systems must accurately capture all relevant data and minimize or eliminate the known issue of “hallucinations,” the generation of plausible but factually incorrect information.

• FLEXIBILITY: Adapting to Diverse Data Ecosystems and Evolving Workflows. The RA landscape is dynamic and complex, with data spread across multiple systems. Effective AI solutions must integrate with existing institutional systems, process diverse data types, and be updated or retrained efficiently. Additionally, AI solutions must adapt flexibly to changing processes and regulations.
• REPRODUCIBILITY: Delivering Consistent Results Across Repeated Inputs and Institutional Users. RA relies on standardized processes and predictable outcomes. AI tools deployed at an organizational level must exhibit high reproducibility, meaning they generate consistent outputs from the same inputs, regardless of who initiates the task or when it is performed. It is crucial that the methods used to generate AI outputs are documented within the institutional record.
• SECURITY: Alignment with Organizational IT Policies and Data Governance. RA workflows are conduits for highly sensitive information. Institutional AI implementation must strictly adhere to the organization’s established IT security policies, data governance frameworks, and relevant regulatory requirements. AI tools must be deployed within secure institutional environments such as approved cloud platforms or campus systems.
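As a concrete illustration of the Reproducibility pillar, the sketch below (a hypothetical wrapper, not part of any system described in this article) runs a model task with deterministic settings and logs a hash of its inputs and outputs for the institutional record:

```python
import hashlib
from datetime import datetime, timezone

def run_auditable_task(model_fn, model_name, prompt, audit_log):
    """Run an AI task reproducibly and record how the output was produced.

    `model_fn` stands in for any model client; a real deployment would pin
    the model version and set temperature to 0 so that identical prompts
    yield identical outputs, regardless of who runs the task or when.
    """
    output = model_fn(prompt)  # assumed deterministic (e.g., temperature=0)
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output

# Two runs of the same prompt against the same pinned model should match,
# and both runs leave an entry in the institutional audit log.
log = []
stub_model = lambda p: "SUMMARY: " + p  # stand-in for a pinned local model
first = run_auditable_task(stub_model, "local-llm-v1", "Summarize award T-123", log)
second = run_auditable_task(stub_model, "local-llm-v1", "Summarize award T-123", log)
```

Recording the model name alongside input and output hashes means any AI-generated item in the institutional record can later be traced to exactly what produced it.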
Adhering to these four pillars provides a robust foundation for developing and deploying AI solutions that move beyond isolated productivity gains. They enable institutions to build scalable, secure, and reliable AI-powered workflows that reliably enhance the efficiency, quality, and compliance of RA operations.
“Crossing the Innovation Valley of Death”: An Institutional Case Study at the University of Idaho
The University of Idaho (UI) developed its roadmap for institutional AI implementation through collaborative efforts between the Office of Sponsored Programs (OSP) and the Institute for Interdisciplinary Data Sciences (IIDS). The collaborative work between the OSP and IIDS was funded in part by a $4.5 million National Science Foundation grant for their project, “Crossing the Innovation Valley of Death: Democratizing Data and Artificial Intelligence for Research Administration” (informally dubbed AI4RA) (University of Idaho, 2024).
This project brought together data scientists, web developers, and RA professionals to form a community of practice focused on sharing knowledge, identifying barriers to AI integration, and guiding the development of practical institutional solutions, such as the implementation of locally hosted AI infrastructure and the development of “The Vandalizer Framework,” a custom suite of AI-powered tools.
• Local AI Infrastructure: The Foundation for Secure and Scalable AI. The UI is strategically committed to locally-hosted AI infrastructure. Our AI Inference Cluster provides a dedicated environment optimized for secure, real-time AI inference. This cluster consists of high-performance servers running dozens of open-source models. Local infrastructure keeps sensitive university data on campus and supports flexibility and cost-efficiency by allowing for the customization of models for specific tasks and reducing reliance on expensive commercial services. This infrastructure provides the scalability needed to support diverse administrative and research applications, forming the bedrock upon which specific applications are built.
• The Vandalizer Framework: Designed for Institutional Needs. IIDS developed “Vandalizer,” a custom AI-powered framework specifically designed for RA workflows and named after the UI mascot, Joe Vandal. Unlike individual-use AI tools, Vandalizer was designed for institutional scalability without compromising the four pillars for secure and reliable workflows. Vandalizer has enabled significant efficiency gains and strong return on investment (ROI) for various tasks, including evaluating contract compliance, extracting data from agreements, creating proposal checklists, and populating subaward templates. Vandalizer can process 100 documents per minute, showcasing how AI solutions can revolutionize institutional operations while maintaining compliance and security across diverse administrative domains.
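The article does not specify the cluster’s interface, but many open-source inference servers (vLLM and Ollama, for example) expose an OpenAI-compatible chat-completions endpoint. A request to locally hosted infrastructure like that described above might be sketched as follows; the model name and campus URL are placeholders, not UI’s actual configuration:

```python
def build_local_inference_request(model, task_prompt, document_text):
    """Build a chat-completion payload for a locally hosted model server.

    Many open-source inference servers expose an OpenAI-compatible
    /v1/chat/completions endpoint; sending requests to a campus-hosted
    server means sensitive documents never leave institutional systems.
    """
    return {
        "model": model,
        "temperature": 0,  # favor reproducible outputs for RA tasks
        "messages": [
            {"role": "system", "content": task_prompt},
            {"role": "user", "content": document_text},
        ],
    }

# This payload would be POSTed to a campus URL such as
# http://ai-cluster.example.edu/v1/chat/completions (placeholder).
payload = build_local_inference_request(
    "llama-3-8b-instruct",
    "Extract the award number and period of performance.",
    "Award No. 12-345 between Sponsor and University...",
)
```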
The journey from individual AI experimentation to robust institutional implementation requires not only a technological shift but also a fundamental reimagining of how an RA enterprise can meet increasing demands without compromising the highest standards of accuracy, security, and compliance. The development and implementation of AI at the institutional level allows for tight integration with core RA infrastructure while simultaneously addressing the limitations of isolated commercial AI tools. Frameworks like Vandalizer demonstrate the potential to reduce operating costs and improve efficiency for research administrators and the broader academic community.
For organizations considering institutional AI implementation, the path forward requires the creation and involvement of a community of practice, with collaboration across technical and administrative domains. By collaboratively embracing the four pillars of institutional AI integration, research administrators can transform the technological frontier from a jagged boundary into a clear pathway toward enhanced productivity, improved compliance, better security, and strategic innovation. With each successful integration, we push this frontier further outward to advance institutional goals and create new territory where AI and human expertise harmonize to redefine excellence in the research enterprise. N
References
Dell’Acqua, F., McFowland, E. III, Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality (Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013; The Wharton School Research Paper). https://ssrn.com/abstract=4573321
University of Idaho. (2025). AI4RA: AI for Research Administration. https://ai4ra.uidaho.edu/








Barrie Robison, PhD, is Professor of Biology at the University of Idaho, and Director of the Institute for Interdisciplinary Data Sciences. He has been an active researcher at the University of Idaho for more than 20 years, studying behavioral genomics, evolutionary biology, and data science. He is actively engaged in developing and deploying generative Artificial Intelligence models in multiple higher education contexts. He is Co-PI of the AI4RA (Artificial Intelligence for Research Administration) project. He can be reached at brobison@uidaho.edu.
Nathan Layman is a data scientist at the University of Idaho’s Office of Sponsored Programs where he serves as a bridge between the research administration team, Technology Services, and the Institute for Interdisciplinary Data Sciences. He has a broad computational background and specializes in developing data driven solutions to make research administration tasks easier, more efficient, and accessible. He can be reached at nlayman@uidaho.edu.
Jason Cahoon is a fiction writer in the Creative Writing MFA at the University of Idaho. Recently, his fiction has engaged the sciences and scientific communication through literary forms. Jason also serves as a writing consultant at the University of Idaho Graduate Writing Center.
Luke James Sheneman, PhD, is Director of Research Computing and Data Services in the Institute for Interdisciplinary Data Sciences, University of Idaho. He leads a talented and innovative technical team that works on a wide range of exciting projects, from soil science and AI to supercomputers and black holes. He can be reached at sheneman@uidaho.edu.
Sylvia Bradshaw is the Executive Director of Sponsored Programs at Southern Utah University. She earned her Master’s degree in Research Administration from Johns Hopkins University. Sylvia recognizes she is happiest when learning, a solid requisite for Research Administration. She can be reached at sylviabradshaw@suu.edu.
Katie Gomez Freeman, Assistant Director of Special Projects/Education at Southern Utah University, specializes in life cycle support and education. A member of the national NCURA Young Professionals and Professional Development committees, she is also a co-contributing editor for the NCURA Magazine Young and New Professionals Track. She can be reached at katiegomez@suu.edu.
Nathan Wiggins is the Data Scientist for the Sponsored Programs, Agreements, Research, and Contracts office at Southern Utah University. He specializes in developing data organization systems, advanced regression analysis, artificial intelligence implementation, mathematical optimization, simulation modeling, and administrative organizational management. He can be reached at nathanwiggins@suu.edu.
Sarah Martonick is the Director of the Office of Sponsored Programs (OSP) at the University of Idaho. In 2024 she and a team of staff and faculty received an NSF GRANTED grant for the AI4RA (Artificial Intelligence for Research Administration) project. She can be reached at smartonick@uidaho.edu.
ORDER YOUR COPY!
Leadership and management are complicated. The “soft stuff” of managing people can be the most challenging aspect. NCURA has a new resource to help!

In this new publication, we discuss the principles of research administration, the art of leadership, the skills of communication, the management of personnel, and the importance of people skills.
CHAPTER 1: THE CHALLENGE OF RESEARCH ADMINISTRATION
CHAPTER 2: THE ART OF LEADERSHIP IN RESEARCH ADMINISTRATION
CHAPTER 3: COMMUNICATION: THE MOST IMPORTANT LEADERSHIP SKILL
CHAPTER 4: CIVILITY AND THE LANGUAGE OF EFFECTIVE LEADERSHIP
CHAPTER 5: KINDNESS: A COROLLARY TO CIVILITY
CHAPTER 6: WORKING WITH DIFFICULT BOSSES
CHAPTER 7: WORKING WITH DIFFICULT EMPLOYEES
CHAPTER 8: FIGHTING BURNOUT
CHAPTER 9: CLIENT SERVICES AND THE FUNDAMENTALS OF RESEARCH ADMINISTRATION
CHAPTER 10: RECRUITING AND RETAINING TALENTED STAFF
By Nada Naser and Paul Lim
Artificial Intelligence (AI) is transforming industries, including research administration, at an unprecedented pace. Research administrators have witnessed its evolution firsthand and recognize it as the next frontier for enhancing basic communication, complex negotiations, and routine administrative and financial support.
Research administrator responsibilities span research compliance, external funding management, and faculty support throughout the research process; this requires navigating intricate processes of grant management, post-award administration, and financial planning. With an increasing volume of research activities and a growing complexity of funding and compliance, AI presents a powerful potential tool to optimize workflows, improve efficiency, and enhance decision-making.
AI in research administration is not about replacing human expertise; it’s about augmenting it. As professionals who oversee grant proposal reviews, sponsor engagement, and compliance monitoring, research administrators know firsthand how overwhelming it can be to manage the routine yet time-consuming aspects of these tasks. This is where AI can be a game changer: not by doing the work for us, but by helping us streamline and refine our processes.
For example, AI can reduce administrative burden by identifying patterns in data, flagging inconsistencies, and providing deadline and compliance reminders. Utilizing AI as a tool ultimately helps free time for us to think critically and make informed decisions that have real impact. By suggesting how to align submissions with sponsor guidelines and identifying common pitfalls in the proposal process, AI-powered tools can provide feedback on best practices for grant proposal reviews.
Without automating the entire process, these tools offer suggestions that help research administrators ensure proposals meet requirements efficiently.
By identifying potential risks based on historical data and common issues encountered by other institutions, AI can also provide insights into compliance monitoring. Such feedback allows administrators and principal investigators to address discrepancies early, preventing minor issues from escalating into major problems.
One of the biggest challenges in research administration is ensuring efficient allocation of funds. AI can analyze spending patterns, detect inefficiencies, and highlight areas where resources are underused or misallocated. Managing research budgets and resources effectively is a core responsibility, and AI helps to streamline these processes. Rather than making predictions, AI tools provide valuable guidance by analyzing historical data and identifying patterns. For example, AI can suggest areas where funds have been consistently well utilized or highlight trends in spending, allowing administrators to make smarter decisions about resource allocation.
AI tools also support financial forecasting by analyzing trends in expenditure, pointing out any potential areas of concern, and recommending adjustments to ensure that the project stays on track. This gives research administrators the ability to spot inefficiencies early, allowing for timely corrections without disrupting the overall research process.
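The kind of early-warning forecast described above can be sketched with a deliberately simple linear burn-rate projection. This is illustrative only; production forecasting inside an AI tool would be more sophisticated, but the signal it surfaces is the same:

```python
def flag_burn_rate(budget, spent_by_month, total_months):
    """Project end-of-project spending from the average monthly burn rate
    and flag the award if the projection would exceed the budget.

    A linear projection is deliberately simple; it illustrates the
    early-warning signal, not a production forecasting model.
    """
    months_elapsed = len(spent_by_month)
    spent = sum(spent_by_month)
    monthly_rate = spent / months_elapsed
    projected_total = monthly_rate * total_months
    return {
        "spent_to_date": spent,
        "projected_total": round(projected_total, 2),
        "over_budget": projected_total > budget,
    }

# A 12-month, $120,000 award spending about $12,500/month projects
# to $150,000 at completion, so it is flagged for early correction.
report = flag_burn_rate(120_000, [12_000, 13_000, 12_500, 12_500], 12)
```

Catching the overrun four months in gives the administrator time to adjust spending or rebudget rather than discovering the problem at closeout.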
In today’s global research landscape, administrators navigate a complex web of funding sources—from U.S. federal agencies to international and private partners—each with distinct compliance requirements and cost principles. This complex funding environment requires simultaneous mastery of institutional policies, federal regulations, and sponsor guidelines while still providing basic support for principal investigators with varying levels of administrative expertise. Even with unified frameworks like the Uniform Guidance, financial management remains challenging. Institutional variations in facilities and administrative (F&A) costs, combined with diverse project-specific expenses for equipment, compliance, and personnel, create a multifaceted environment where comprehensive oversight demands significant administrative resources.
AI offers powerful tools to enhance administrative capabilities without replacing skilled professionals. Localized large language models (LLMs) and AI Bots (software programs designed for conversational user-directed interactions) have the power to transform how administrators and faculty access critical information. Administrators benefit from immediate AI-generated compliance guidance across multiple policy sources, while researchers receive timely answers about grant requirements and budget availability—even outside business hours. This dual support system ensures compliance while reducing administrative friction for investigators.
• AI Bots have the capability to integrate seamlessly with procurement systems, allowing both administrators and researchers to simply state their needs and receive options from approved vendors with price quotes; this streamlines purchasing workflows into efficient interactions.
• Properly configured AI agents (software programs that execute pre-established tasks with reduced or minimum human intervention) could transform financial oversight and resource allocation by collating data from disparate systems (GMS, ERP, Excel), generating predictive analytics based on historical patterns, and highlighting budgetary concerns before they become critical. These systems identify patterns across large datasets that would be invisible to individual administrators, revealing optimization opportunities and policy conflicts.
• When projects fall behind schedule, AI agents could suggest corrective actions like no-cost extensions (NCEs) while maintaining human decision-making authority. Once administrators make decisions, these AI agents could then execute the approved actions—for example, completing NCE applications, submitting budget transfer requests, or processing reallocations—thereby reducing administrative burden while preserving human oversight.
• By analyzing spending behaviors across multiple projects, AI generates insights that help administrators prioritize resources for high-impact initiatives while maintaining compliance.
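The data-collation step those points describe might look like the following sketch. The field names for each source system are invented for illustration, since real GMS and ERP exports vary by vendor:

```python
def collate_expenses(gms_rows, erp_rows, sheet_rows):
    """Merge expense records from three hypothetical source formats into
    one schema so patterns can be analyzed across systems.
    """
    unified = []
    for r in gms_rows:    # e.g., {"award": ..., "amt": ...} from a GMS export
        unified.append({"award": r["award"], "amount": float(r["amt"]), "source": "GMS"})
    for r in erp_rows:    # e.g., {"project_id": ..., "total": ...} from an ERP
        unified.append({"award": r["project_id"], "amount": float(r["total"]), "source": "ERP"})
    for r in sheet_rows:  # e.g., [award, amount] rows from a spreadsheet
        unified.append({"award": r[0], "amount": float(r[1]), "source": "sheet"})
    return unified

# Once normalized, totals per award can be computed uniformly across systems.
records = collate_expenses(
    [{"award": "A-1", "amt": "100.00"}],
    [{"project_id": "A-1", "total": 250}],
    [["A-2", "75.5"]],
)
```

Normalizing into one schema is the unglamorous prerequisite for every analytic step the article mentions; the predictive layer can only see patterns the collation layer has made comparable.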
Some may view AI as an obscure new tool, here to steal our jobs away. One of our objectives is to examine that perception by asking a few questions. What challenges come along with the use of AI in our field? AI requires an investment in infrastructure and training to ensure staff are equipped to use these tools effectively. Data security may represent a hurdle to such use and requires vigilant oversight. AI systems often handle sensitive research information, and therefore need to meet strict security standards to ensure compliance with privacy regulations.
Although the above points are challenges, they are surmountable. AI integration creates shared data environments where administrators and researchers across departments can access consistent information and standardized processes, thereby enhancing collaboration. Maintaining data security and privacy is paramount while implementing AI solutions. AI systems must be designed with strong encryption, access controls, and compliance with applicable local and international regulations, such as FERPA and HIPAA in the United States. These safeguards ensure that sensitive research information and financial data remain protected without compromising the analytical benefits of AI. Implementation success depends on institutional context: smaller organizations may target specific pain points, while larger institutions might develop comprehensive strategies. Successful adoption requires realistic timelines, appropriate resources, and stakeholder buy-in through clear communication about how these tools augment human expertise.
One of the most important roles of research administrators is supporting faculty, particularly as they navigate the often-complicated process of applying for research funding. AI-powered tools simplify the application process by providing principal investigators with immediate, personalized guidance—answering frequently asked questions about eligibility, submission guidelines, and deadlines—allowing them to focus on their research rather than wait for answers from research offices.
AI tools also have the ability to analyze past funding applications and suggest similar opportunities based on a faculty member’s research interests and history. This personalized approach helps researchers find the best-fit funding sources without having to sift through countless options; this saves time and increases the chances of success.
Research administrators support faculty, sponsors, and institutions by ensuring smooth operations within compliance and regulatory guidelines. AI complements this human expertise by enhancing decision-making, optimizing processes, and improving resource allocation.
By leveraging AI strategically, research administrators can focus on high-level strategic decision-making while automating routine tasks. However, AI must be integrated thoughtfully, with input from professionals who understand the nuances of research administration.
The future of research administration will undoubtedly be shaped by AI, but it is key that we continue to bring human insight and judgment to the table. By striking the right balance, institutions can create more efficient and effective environments for both faculty and administrators, where AI serves as a tool for progress rather than a substitute for expertise. N


Nada Naser is the Director of Research Services at Khalifa University in Abu Dhabi, UAE. She has led strategic research initiatives, women’s health programs, and nonprofit collaboration — leveraging her policy background to drive impactful innovation and partnerships in academia. She can be reached at nada.naser@ku.ac.ae.
Paul Lim is Head of Academic Financial Planning and Acting Head of Budget at Abu Dhabi’s Mohamed bin Zayed University of Artificial Intelligence, overseeing institutional and Provost-level budgeting. His core expertise is in strategic financial planning and supporting organizational growth. He can be reached at Paul.Lim@mbzuai.ac.ae.

By Karen M. Markin
Some faculty who seek to relieve the burden of writing and reviewing grant proposals are looking at artificial intelligence (AI) as a tool to help with those tasks. Research administrators have an opportunity to caution them about the judicious use of this technology. Major federal funding agencies in the United States currently prohibit the use of AI in the peer review process. ChatGPT was released to the general public in November 2022 by OpenAI, a San Francisco artificial intelligence company (Roose, 2022). In the following year, both the National Institutes of Health and the National Science Foundation issued notices prohibiting the use of generative artificial intelligence in the merit review process. These models are trained on vast quantities of material to learn the semantic relationships between words (Marr, 2023). The performance of a model depends on the quality of the data used to train it. Training data can include information publicly available from the internet or downloaded databases such as Wikipedia (Register of Copyrights, 2025). When users share material with a tool such as ChatGPT, they lose control over its disclosure. Thus, federal agencies’ key concern about the use of AI is the breach of confidentiality of the proposal under review.
“AI tools have no guarantee of where data are being sent, saved, viewed, or used in the future,” according to the NIH policy (2023). NSF noted that tools may incorporate the information into their datasets and use it to train for future users (NSF, 2023). In response to these concerns, NSF issued a notice in December 2023 stating that “sharing proposal information with generative AI technology via the open internet violates the confidentiality and integrity principles of NSF’s merit review process. Any information uploaded into generative AI tools not behind NSF’s firewall is considered to be entering the public domain” (NSF, 2023).
Similarly, NIH issued a notice in June 2023 titled “The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process,” and it remains in force (NIH, 2023). “The use of generative AI tools to output a peer review critique on a specific grant application…requires substantial and detailed information inputs. AI tools have no guarantee of where data are being sent, saved, viewed or used in the future,” it states. Reviewers are thus prohibited from using AI tools to analyze and critique grant and contract proposals.
NIH values the expertise and originality of thought that scientists put into their proposal reviews. “We take this issue seriously,” wrote Mike Lauer, former deputy director for extramural research at NIH. “Applicants are trusting us to protect their proprietary, sensitive, and confidential ideas from being given to others who do not have a need to know” (NIH, 2024).
The U.S. Department of Agriculture’s National Institute of Food and Agriculture also prohibits the use of generative AI in peer review. “NIFA cannot protect non-public information disclosed to a third-party generative AI system from being accessed by undisclosed third parties,” according to the agency’s website (2021). “If information from the peer review process is disclosed without authorization through generative AI or otherwise, NIFA loses the ability to protect it from further release. This loss of control creates a significant risk to researchers and their ideas.”
Some observers think this prohibition won’t last, because some AI models work offline and therefore don’t pose problems with confidentiality (Kaiser, 2023). But it’s in place for now, and investigators need to know that.
“We have a crucial role to play in guiding our colleagues through the emerging world of AI.”
Indeed, there is concern internationally about the role of AI in the scientific research enterprise. In guidelines released in April 2025, the European Commission recommended that researchers “refrain from using generative AI tools substantially in sensitive activities that could impact other researchers or organizations (for example, peer review, evaluation of research proposals, etc.)” (European Commission, 2025). This approach will protect unpublished work from potential exposure in an AI model. Down under, the Australian Research Council banned generative AI for peer review after discovering reviews that apparently were written by ChatGPT (Kaiser, 2023).
Regarding the use of AI in proposal preparation, NIH issued a policy on July 17, 2025, stating it “will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants. If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct” (NIH, 2025).
Agencies that don’t prohibit the use of AI in proposal preparation warn investigators that they will be held responsible for any problems that result from use of the technology. For example, plagiarism, fabrication, and falsification are research misconduct for which the principal investigator is responsible, even if the offending material was generated by an AI tool. Hallucinations, or presentations of false information as true, remain a problem with AI. Some reports indicate they are getting worse (Hsu, 2025).
However, the use of generative AI in grant proposal preparation remains appealing to scientists. A robotics lecturer in the United Kingdom stated what many principal investigators probably think: “I’ve always hated writing grants.” He used AI to write sections of a proposal and edited the results before submitting the document. He said the use of ChatGPT reduced his workload from three days to three hours (Parrilla, 2023).
Scientists are forging ahead with advice on how to use AI to prepare proposals and chatbots to assist with the process. A 2024 article in PLOS Computational Biology discussed how to use large language models (LLMs) such as ChatGPT to assist with grant writing (Seckel, Stephens, & Rodriguez, 2024). The authors advise against using AI to write a grant outright; instead, they suggest using it to evaluate different sections of the proposal. “In our experience, we found that LLMs excel when provided with instructions to narrow down their focus to a specific task or section, which you can achieve by using custom prompts” (Seckel et al., 2024). They then recommend that the grant writer fact-check everything to guard against hallucinations, and that AI-generated text be used as inspiration rather than copied verbatim.
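The section-focused prompting the authors recommend can be as simple as a reusable template that narrows the model to one section and one task. The wording below is illustrative, not taken from the cited article:

```python
SECTION_PROMPT = (
    "You are reviewing only the {section} section of a grant proposal to {sponsor}. "
    "Task: {task}. Do not comment on other sections. "
    "Flag any claim you cannot verify so the writer can fact-check it.\n\n"
    "Section text:\n{text}"
)

def build_section_prompt(section, sponsor, task, text):
    """Fill the template so the model's focus is narrowed to one section
    and one evaluation task, per the section-by-section advice above."""
    return SECTION_PROMPT.format(section=section, sponsor=sponsor, task=task, text=text)

prompt = build_section_prompt(
    "Specific Aims",
    "NIH",
    "check that each aim states a testable hypothesis",
    "Aim 1: Determine whether X regulates Y.",
)
```

Asking the writer to fact-check flagged claims keeps the human responsible for accuracy, which is exactly the posture the agency policies above demand.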
Meanwhile, computer scientists are working to construct a chatbot to assist with the writing of proposals to NSF. A team at North Carolina State University is developing a tool to provide tailored writing templates for each section of an NSF proposal, adhering to agency guidelines (Kasierski & Fagnano, 2024).
AI is making its way into workplaces of all kinds, and universities and laboratories are no exception. Faculty likely will want to use it to reduce what they see as the drudgery associated with grant writing. Research administrators can support them by familiarizing them with agency policies and best practices for avoiding pitfalls. N
References
European Commission. (2025). Living Guidelines on the Responsible Use of Generative AI in Research. https://european-research-area.ec.europa.eu/news/living-guidelines-responsible-use-generative-ai-research-published
Hsu, J. (2025, May 9). AI hallucinations are getting worse – and they’re here to stay. New Scientist. www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay
Kaiser, J. (2023, July 2). Funding agencies say no to AI peer review. Science, 381, 261.
Kasierski, B., & Fagnano, E. (2024, October 20-22). Optimizing the grant writing process: A framework for creating a grant writing assistant using ChatGPT. SIGDOC ’24, Fairfax, VA, 286-291.
Marr, B. (2023, September 19). What is generative AI: A super-simple explanation anyone can understand. Forbes. www.forbes.com/sites/bernardmarr/2023/09/19/what-is-generative-ai-a-super-simple-explanation-anyone-can-understand
National Institute of Food and Agriculture. (2021). NIFA Peer Review Process for Competitive Grant Applications. www.nifa.usda.gov/nifa-peer-review-process-competitive-grant-applications
National Institutes of Health. (2025). Supporting Fairness and Originality in NIH Research Applications. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-25-132.html
National Institutes of Health. (2024). Frequently Asked Questions: Use of Generative AI in Peer Review. https://grants.nih.gov/faqs#/use-of-generative-ai-in-peer-review.htm
National Institutes of Health. (2023, June 23). The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html
National Science Foundation. (2023, December 14). Notice to Research Community: Use of Generative Artificial Intelligence Technology in the NSF Merit Review Process. www.nsf.gov/news/notice-to-the-research-community-on-ai
Parrilla, J.M. (2023, November 9). AI takes on grant applications. Nature, 623, 443.
Register of Copyrights. (2025). Copyright and Artificial Intelligence: Part 3: Generative AI Training (Pre-publication version). www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf
Roose, K. (2022, December 5). The Brilliance and Weirdness of ChatGPT. The New York Times. www.nytimes.com
Seckel, E., Stephens, B.Y., & Rodriguez, F. (2024). Ten simple rules to leverage large language models for getting grants. PLOS Computational Biology, 20(3), 1-7. https://doi.org/10.1371/journal.pcbi.1011863

Karen M. Markin, PhD, is the Director of Research Development at the University of Rhode Island. She has been in research administration for more than 20 years. She currently serves as Secretary of NCURA Region I. Karen can be reached at kmarkin@uri.edu.
By Rashonda Harris and Saiqa Anne Qureshi
Artificial Intelligence (AI) is transforming industries worldwide, and research administration is no exception. While some perceive AI as a threat to jobs, norms, and workflows, it is, in reality, an opportunity to rethink how we leverage this powerful tool to support our broader mission.
By automating routine tasks, enhancing decision-making, and improving compliance monitoring, AI offers research administrators new possibilities to increase efficiency and strategic focus. AI has the potential to revolutionize grant proposal reviews, compliance tracking, and predictive analytics, reshaping the way institutions manage research funding and operations (University of Idaho, 2024). However, AI requires human input to navigate the nuances of research administration effectively. While it can enhance compliance and operational efficiencies, human oversight remains essential to ensuring adherence to policies, ethical considerations, and institutional values.
AI can significantly reduce administrative burdens in research administration, particularly in grant proposal reviews and compliance checks. By automating eligibility screening and compliance verification, AI systems can efficiently process applications, allowing administrators to focus on strategic initiatives rather than manual verification (University of Idaho, 2024).

AI-powered tools extract key data from proposals, cross-reference it with funding criteria, and flag potential compliance issues before submission, enhancing accuracy, minimizing errors, and accelerating the review process (Cayuse, 2023).
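The extract-and-flag workflow described above can be sketched in a few lines. The field names, criteria, and dollar thresholds below are invented for illustration, not any vendor's actual schema; a production tool would be far more sophisticated:

```python
# Illustrative sketch: rule-based pre-submission check of extracted proposal
# data against funder criteria. All names and limits are hypothetical.

def precheck(proposal: dict, criteria: dict) -> list[str]:
    """Cross-reference extracted proposal data with funder criteria; return flags."""
    flags = []
    if proposal["budget_total"] > criteria["budget_cap"]:
        flags.append(f"Budget ${proposal['budget_total']:,} exceeds cap ${criteria['budget_cap']:,}")
    missing = [s for s in criteria["required_sections"] if s not in proposal["sections"]]
    if missing:
        flags.append(f"Missing required sections: {', '.join(missing)}")
    if proposal["indirect_rate"] > criteria["max_indirect_rate"]:
        flags.append(f"Indirect rate {proposal['indirect_rate']:.0%} exceeds allowed {criteria['max_indirect_rate']:.0%}")
    return flags

proposal = {"budget_total": 520_000, "indirect_rate": 0.62,
            "sections": ["narrative", "budget"]}
criteria = {"budget_cap": 500_000, "max_indirect_rate": 0.55,
            "required_sections": ["narrative", "budget", "data_management_plan"]}
for flag in precheck(proposal, criteria):
    print(flag)
```

Every issue surfaces before submission, while a human still decides what to do about each flag.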
Furthermore, AI-driven compliance monitoring can detect policy violations in real time, mitigating risks before they escalate. Automated tracking of regulatory requirements ensures deadlines are met, reducing administrative burdens and enhancing institutional accountability (University of Idaho, 2024). However, AI is only as effective as the structure and rules guiding its application. Without proper oversight, it risks amplifying inefficiencies rather than resolving them. As the saying goes, a backhoe is more efficient than a shovel, but if you are digging in the wrong place, you only create a bigger problem—faster.
Predictive analytics, a core AI capability, enables research administrators to make data-driven decisions by forecasting funding success and optimizing resource allocation. AI models analyze historical funding data to predict the likelihood of grant approval, allowing institutions to prioritize high-potential projects (CommunityForce, 2024).
For example, AI can assess trends in faculty success rates, enabling organizations to focus on applications with the highest probability of securing funding. Additionally, AI-driven insights into spending patterns and funding gaps help institutions allocate resources effectively, ensuring strategic investments in high-impact research areas (Peak Grantmaking, 2024). However, AI-generated data alone is not sufficient—human expertise is required to interpret insights and apply them within institutional and funding contexts.
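As a rough illustration of this kind of historical analysis, the sketch below computes award rates by sponsor program from a made-up submission history. Real predictive models would weigh many more variables; this only shows the basic pattern:

```python
# Illustrative sketch: mining historical submission records for success rates.
# The records and field names are invented for demonstration.
from collections import defaultdict

def success_rates(history: list[dict]) -> dict[str, float]:
    """Fraction of past submissions awarded, grouped by sponsor program."""
    counts = defaultdict(lambda: [0, 0])  # program -> [awarded, submitted]
    for record in history:
        counts[record["program"]][1] += 1
        if record["awarded"]:
            counts[record["program"]][0] += 1
    return {p: a / n for p, (a, n) in counts.items()}

history = [
    {"program": "NSF CAREER", "awarded": True},
    {"program": "NSF CAREER", "awarded": False},
    {"program": "NIH R01",    "awarded": False},
    {"program": "NIH R01",    "awarded": False},
    {"program": "NIH R01",    "awarded": True},
    {"program": "NIH R01",    "awarded": False},
]
# Rank programs by historical success rate, highest first.
for program, rate in sorted(success_rates(history).items(), key=lambda kv: -kv[1]):
    print(f"{program}: {rate:.0%}")
```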
AI-powered tools, such as virtual assistants and grant-writing aids, are enhancing faculty support in research administration. AI chatbots provide immediate responses to faculty inquiries regarding grants, deadlines, and submission requirements, ensuring 24/7 support that improves faculty experience and reduces administrative bottlenecks (Cayuse, 2023).
Moreover, AI-driven grant-writing assistance automates standard proposal sections and offers tailored recommendations based on funder priorities, streamlining the application process and increasing the likelihood of success (Ignyte Group, 2024). However, AI’s recommendations are based on existing data, which may not capture the unique requirements of specific funding opportunities. Human input remains necessary to refine AI-generated content and ensure alignment with funding agency expectations. Additionally, AI can support resubmissions by analyzing feedback from rejected proposals. By helping applicants adjust and strengthen their proposals based on reviewer comments, AI can provide focused suggestions on areas that need improvement. This type of detailed revision is often difficult to navigate manually, but AI can assist in identifying key areas for refinement while research administrators guide the strategic adjustments.
AI plays a critical role in enhancing compliance and risk management within research administration. By automating real-time risk assessments, AI can identify potential audit risks and financial irregularities, enabling proactive measures to prevent compliance failures (University of Idaho, 2024).
For example, AI can analyze financial transaction data to detect anomalies that may indicate fraud or mismanagement. Additionally, AI automates subrecipient monitoring and reporting processes, ensuring that institutions and their partners comply with regulatory requirements (SRA International, 2023). AI systems can also track changes in federal regulations and funding guidelines, helping institutions maintain compliance. However, AI is only as reliable as the data it is trained on. Research administrators must ensure that AI-driven compliance measures align with institutional risk management strategies and are regularly updated to reflect policy changes.
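A minimal sketch of this kind of anomaly screening, assuming a simple statistical rule (transactions far from an account's historical mean). Real fraud-detection systems use far richer machine-learning models; the threshold and figures here are arbitrary examples:

```python
# Illustrative sketch: flagging transaction amounts that deviate sharply
# from an account's historical pattern. Thresholds are arbitrary examples.
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of transactions more than z_threshold std devs from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > z_threshold]

# Routine lab-supply charges with one outsized payment slipped in.
charges = [112.50, 98.20, 105.75, 120.00, 101.30, 9_850.00, 99.45, 110.10]
for i in flag_anomalies(charges, z_threshold=2.0):
    print(f"Review transaction {i}: ${charges[i]:,.2f}")
```

The flagged items go to a human reviewer; the statistics only decide what is worth a look.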
AI’s Role in Collaboration and Institutional Strategy
AI facilitates interdepartmental collaboration and informs institutional strategy by providing data-driven insights. AI tools enable pre- and post-award teams to coordinate more effectively, fostering transparency and efficiency in research administration (SRA International, 2024).
Additionally, AI supports the visualization of complex data, helping administrators make informed decisions based on patterns and trends. AI-driven benchmarking against peer institutions offers valuable insights into research growth and funding diversification strategies. Institutions that leverage AI effectively can use these insights to refine their research priorities and improve funding acquisition strategies (SRA International, 2024).
However, AI should be viewed as a supplement to human decision-making rather than a replacement. Research administrators must remain engaged in interpreting AI-generated insights and using them to enhance institutional strategies.
While AI offers significant benefits, it also raises ethical concerns that must be addressed. Data privacy and security are critical, as AI systems handle sensitive research information. Institutions must implement robust safeguards to prevent data breaches and ensure compliance with privacy regulations (SRA International, 2023).
Moreover, AI is not a neutral technology—it reflects the biases embedded in its training data. Bias can emerge from human input, influencing AI-driven decisions in ways that may reinforce inequities in research funding and compliance monitoring. Research administrators must remain impartial and critically evaluate AI-generated recommendations to ensure fair and equitable decision-making. Institutions must work to mitigate AI bias by refining models and ensuring diverse, representative datasets (SRA International, 2023).
As we integrate AI into research administration, we must recognize that AI is a tool—not a decision-maker. Human oversight is essential to ensuring that AI serves institutional goals without compromising fairness, accuracy, or compliance.
The integration of AI into research administration marks a new era of efficiency and innovation. By automating routine processes, enhancing decision-making, and strengthening compliance, AI empowers research administrators to focus on strategic initiatives that advance institutional goals. However, AI needs human input to navigate the complexities of research administration effectively.
As AI continues to evolve, research administrators must embrace a balanced approach—leveraging AI to enhance research administration while maintaining critical human judgment to guide its application. AI is not merely a tool; it is a strategic asset that, when used thoughtfully and responsibly, can transform the future of research administration. N
References
Cayuse. (2023). Industry evolution: The impact of technology and AI in research administration. Retrieved from https://cayuse.com/blog/industry-evolution-the-impact-of-technology-and-ai-in-research-administration
CommunityForce. (2024). Maximizing impact with data-driven grantmaking: Strategies for success. Retrieved from www.communityforce.com/maximizing-impact-with-data-driven-grantmaking-strategies-for-success
Ignyte Group. (2024). AI unleashed: The mastery of predictive analytics in grants management solutions. Retrieved from https://ignytegroup.com/blog/mastery-of-predictive-analytics-in-grants-management-solution
Peak Grantmaking. (2024). Grantmakers embrace the power of predictive analytics. Retrieved from www.peakgrantmaking.org/insights/grantmakers-embrace-power-predictive-analytics
SRA International. (2023). AI and research administration. Retrieved from www.srainternational.org/blogs/srai-news/2023/10/10/ai-and-research-administration
University of Idaho. (2024). Artificial intelligence for research administration. Retrieved from https://ai4ra.uidaho.edu

Dr. Rashonda Harris, MBA, CRA, is a research administration leader with more than 25 years of experience, specializing in compliance, strategic planning, and DEI. She is the Founder and CEO of Purple Sheep Consulting, a lecturer at Johns Hopkins University, and a past NCURA Board Member. An award-winning mentor and author of Purple Harvest: Planting Goals, Growing Truths, she is dedicated to empowering professionals and advancing research administration. She can be reached at rashonda.harris@roswellpark.org.

Saiqa Anne Qureshi, PhD, MBA, is a sought-after speaker, trainer, editor and writer. She has over 15 years of experience in research management in both the US and Europe. She is an advocate for data driven decision making and is currently an adjunct faculty in research administration at Johns Hopkins University, and additionally teaches in compliance at a university level. She can be reached at Saiqa@pacbell.net.

By Monia W. Desert and Lhakpa Bhuti

Artificial Intelligence (AI) is transforming industries worldwide, and financial research administration is no exception. AI refers to the ability of computer systems to replicate human cognitive functions (Russell & Norvig, 2016). Financial research administration encompasses all activities related to managing the financial aspects of sponsored projects: budgeting, expenditure tracking, financial reporting, compliance, and grant management (Shaklee, 2003).
AI can significantly enhance efficiency by processing vast amounts of financial data quickly and accurately. This capability supports smarter financial management, informed decision-making, process optimization, workforce augmentation, and error reduction (Roy et al., 2025). The integration of AI into finance, commonly known as Fintech (financial technology), has led to groundbreaking innovations such as automated customer support and financial strategies (Arner et al., 2016). AI holds great promise for administrative oversight, regulatory compliance, and strategic financial planning, potentially reshaping how institutions manage financial data.
A study by Pan and Zhang (2024) explored AI’s role in improving financial reporting accuracy through automation. The researchers identified four key benefits:
1. Automation of Routine Tasks – AI-driven systems can streamline manual processes such as data entry, expense tracking, and account reconciliation. Traditionally, these tasks are labor-intensive and prone to human error, but AI automates repetitive functions, saving time and resources while enhancing security and real-time risk assessment. AI-powered financial management tools can integrate with existing accounting systems, ensuring seamless data synchronization and reducing the likelihood of discrepancies.
2. Improved Data Accuracy – AI-powered algorithms analyze large datasets to ensure consistency and precision in financial records. These tools can extract and categorize financial data from invoices and receipts, eliminating the need for manual input while identifying discrepancies for further review. AI can also enhance forecasting models, allowing institutions to predict financial trends and allocate resources more effectively.
3. Fraud Detection and Compliance – AI systems strengthen compliance by cross-referencing financial transactions with federal regulations and institutional policies. Non-compliance can result in financial penalties, reputational damage, and funding loss, making AI’s ability to automatically detect anomalies and potential fraud invaluable. AI-driven fraud detection systems use machine learning to identify suspicious patterns in financial transactions, flagging irregularities for further investigation.
"As research institutions continue to manage increasingly complex financial portfolios, AI presents an opportunity to streamline operations, enhance compliance, and improve strategic decision-making."
4. Challenges and Risks – AI relies on large datasets, requiring strong cybersecurity protections to prevent breaches. Additionally, algorithmic bias may impact financial decision-making if AI models are trained on incomplete or skewed datasets. While AI enhances efficiency, human oversight remains essential to interpret insights and ensure ethical financial governance. Institutions must implement robust data governance frameworks to mitigate risks associated with AI-driven financial management.
AI can streamline grant management, optimize compliance reviews, and improve risk mitigation. Institutions managing multiple grants must navigate complex reporting requirements, and AI offers a solution by enabling real-time expenditure tracking. Automated systems ensure that spending aligns with funding agency conditions, reducing administrative burdens and allowing research administrators to focus on strategic priorities.
A key application of AI could be expenditure compliance assessments. Federal regulations, such as those outlined in the Code of Federal Regulations cost principles, dictate allowable expenses for federally funded projects. AI-driven compliance tools could cross-check expenditures with these principles, flagging exceptions for human review. AI can also assist in automating grant proposal evaluations, ensuring that funding applications meet eligibility criteria and align with institutional priorities.
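To make the idea concrete, here is a toy screening pass against a simplified allowability table. The categories, thresholds, and rules are invented for illustration and are not a substitute for actual cost-principle analysis under the federal uniform guidance (2 CFR 200):

```python
# Illustrative sketch: screening expenditures against a simplified, invented
# allowability table. Real cost-principle determinations are far more nuanced;
# every flagged item below is routed to a human reviewer, not auto-rejected.

UNALLOWABLE = {"alcohol", "entertainment", "lobbying"}
REVIEW_IF_OVER = {"equipment": 5_000, "participant_support": 10_000}

def screen(expenditures: list[dict]) -> list[str]:
    """Return a list of exception messages for human review."""
    exceptions = []
    for e in expenditures:
        if e["category"] in UNALLOWABLE:
            exceptions.append(f"{e['id']}: '{e['category']}' is unallowable")
        elif e["amount"] > REVIEW_IF_OVER.get(e["category"], float("inf")):
            exceptions.append(f"{e['id']}: {e['category']} ${e['amount']:,} needs prior-approval review")
    return exceptions

charges = [
    {"id": "TX-101", "category": "lab_supplies",  "amount": 840},
    {"id": "TX-102", "category": "entertainment", "amount": 300},
    {"id": "TX-103", "category": "equipment",     "amount": 7_200},
]
for line in screen(charges):
    print(line)
```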
AI-driven predictive analytics could further strengthen policy adherence by examining historical spending patterns. If a specific cost category routinely exceeds allowable thresholds, AI could alert administrators, prompting corrective action before a compliance issue arises. The operational advantages of AI are clear: automation minimizes administrative workload, empowering financial research administrators to concentrate on strategic planning.
Despite AI’s potential, human judgment remains indispensable. AI relies on historical financial data, meaning biased or incomplete datasets could produce flawed conclusions—potentially influencing financial decisions with significant repercussions (Oyeniyi et al., 2024). Furthermore, AI is limited in addressing ethical considerations related to grant allocation, as it cannot accurately assess the moral weight of funding decisions or broader societal impacts. In these cases, human reasoning must take precedence.
AI also struggles with contextual decision-making, particularly in cases where financial expenditures fall into ambiguous categories. For example, AI may flag an expense as non-compliant based on predefined rules, but human administrators may recognize its necessity within the broader scope of a research project. This highlights the importance of human oversight in interpreting AI-generated insights.
Additionally, AI’s role in financial research administration must be continuously evaluated and refined. Institutions must establish clear ethical guidelines for AI implementation, ensuring that automated systems align with organizational values and regulatory requirements. AI should be viewed as a collaborative tool rather than a replacement for human expertise (RTS, 2024).
As AI continues to evolve, financial research administration must strike a balance between automation-driven efficiency and human oversight in regulatory compliance, financial strategy, and ethical decision-making (GAO, 2025). Institutions that successfully integrate AI while preserving human expertise will be well-positioned for long-term success in an increasingly complex financial research landscape.
Future advancements in AI may include adaptive learning models, which refine financial management strategies based on real-time data analysis. AI-driven blockchain technology could further enhance financial transparency, ensuring that research funding is allocated and utilized with maximum accountability. Ultimately, AI’s role in financial research administration will depend on how institutions leverage its capabilities while maintaining ethical and strategic oversight. By embracing AI as a powerful tool for efficiency and compliance, research administrators can optimize financial processes, reduce risks, and drive innovation in financial management. N
References
Arner, D. W., Barberis, J., & Buckley, R. P. (2016). FinTech, RegTech, and the reconceptualization of financial regulation. Northwestern Journal of International Law & Business, 37(3), 371–413.
Oyeniyi, L. D., Ugochukwu, C. E., & Mhlongo, N. Z. (2024). The influence of AI on financial reporting quality: A critical review and analysis. World Journal of Advanced Research and Reviews, 22(1), 679–694. https://doi.org/10.30574/wjarr.2024.22.1.1157
Pan, H., & Zhang, Z. (2024). Research on automation and data accuracy of financial reporting driven by artificial intelligence. Financial Engineering and Risk Management, 7(6). Clausius Scientific Press. https://doi.org/10.23977/ferm.2024.07062
Roy, P., Ghose, B., Singh, P. K., Tyagi, P. K., & Vasudevan, A. (2025). Artificial intelligence and finance: A bibliometric review on the trends, influences, and research directions. F1000Research, 14, 122. https://doi.org/10.12688/f1000research.160959
RTS Labs. (2024, October 21). How AI is enhancing decision-making and efficiency in finance. https://rtslabs.com/ai-enhancing-decision-making-and-efficiency-in-finance
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson Education.
Shaklee, T. (2003). Research administration in the United States. In S. Kerridge, S. Poli, & M. Yang-Yoshihara (Eds.), The Emerald handbook of research management and administration around the world (pp. 473–481). Emerald Publishing Limited. https://doi.org/10.1108/978-1-80382-701-820231040
U.S. Government Accountability Office (GAO). (2025). Artificial intelligence: Use and oversight in financial services. GAO Report 25-107197.


Monia W. Desert is a Program Assistant within the Center for Clinical Research Advancement at Boston Medical Center. Monia can be reached at monia.desert@bmc.org.
Lhakpa Bhuti is a Grant Administrator/Finance and Operations Manager for the Center for Integration Science – Division of Global Health Equity at Brigham and Women’s Hospital – Department of Medicine. Lhakpa can be reached at lhakpa_bhuti@post.harvard.edu.

NCURA’S VIRTUAL WORKSHOPS OFFER LIVE INTERACTIVE, EXPERT-LED SESSIONS:
September 9-10, 2025 | November 12-13, 2025
Level I: Fundamentals of Sponsored Projects Administration - Part 1: Pre-Award
September 30-October 1, 2025 | November 12-13, 2025
Level I: Fundamentals of Sponsored Projects Administration - Part 2: Post-Award
October 6-9, 2025
Level II: Sponsored Projects Administration
December 1-4, 2025
Contract Negotiation & Administration
INDIANAPOLIS, IN
September 15-17, 2025
Level I: Fundamentals of Sponsored Projects Administration
Financial Research Administration
Contract Negotiation & Administration

For current information on open workshop registrations, please visit our website here.
LOOKING TO FOSTER MORE COLLABORATION AND ENGAGEMENT WITH YOUR TEAM?
NCURA is pleased to offer in-person and virtual On-Campus Traveling Workshop trainings. Visit our On-Campus website here or email Gabby Hughes, hughes@ncura.edu to learn more today!
By Samantha Westcott
As a peer reviewer, it is a privilege to learn about the work of research administrators and identify notable practices in their institutions. In peer review assessments, commonly recognized practices include professional development, kickoff meetings, strong documentation, effective training, comprehensive roles and responsibilities matrices, and excellent communication. These practices are central to the field and are continuously refined across research administration.
However, one of the most enriching aspects of peer assessments lies in discovering the less obvious yet highly impactful practices that reflect the unique character of an institution: its culture, environment, and leadership philosophy. These underrecognized practices are often the “secret sauce” that differentiates a productive, supportive research environment. Sharing these, while respecting confidentiality, can spark reflection and inspire adoption at other institutions.
Hidden Strength: Proactive Faculty Support Structures
“Established mechanisms for chain of command communication exist where faculty can voice concerns and provide input regarding challenges in meeting sponsored program requirements.”
Faculty often feel lost when planning a project that includes challenging sponsor requirements. An office may be so strained by workload and staffing that the PI has no resource within research administration to support these efforts. Yet lacking the means to address the requirements in a proposal, or to carry them out during the life of the project, is detrimental to the institution and can lead to unintended consequences. Recognized channels through which faculty can reach out, seek help, and connect to resources in research administration are an underrecognized notable practice.
This notable practice represents a proactive feedback loop. The presence of a defined chain of command not only empowers faculty but also surfaces systemic issues that can be addressed more broadly. Empowering faculty with defined communication pathways creates a culture of accountability and trust, ensuring that research challenges are addressed early and institutionally.
Takeaway question: Do our faculty know how to ask for help? Are those who can help listening?
Hidden Strength: Integrating Research Growth into Strategic Planning
“After achieving XX status, the university began addressing the need for research infrastructure. This vision was incorporated into the university’s strategic plan, which included the development of both personnel and systems to support increased research activity.”
Conscientiously and strategically planning for the future of research administration as the institution's research enterprise evolves is critical, and it is often misunderstood or taken for granted. Making strategic decisions about how to keep pace with expanding activity and recognized status, and determining the capacity to grow further, is an underrecognized notable practice. It reflects an intentional alignment between research ambitions and institutional planning, something often missing or delayed in fast-growing environments. Institutions might celebrate growth in sponsored research but fail to invest in the foundational support structures needed to sustain it. This institution showed itself to be forward-thinking, with strategic foresight informing operational capacity.
Takeaway question: Do we treat research administration as a key pillar in our institutional growth strategy, or as a reactive function that merely responds to growth?
Hidden Strength: Bridging Advancement and Research Administration
“The Director and University Advancement maintain a strong working relationship. A strong working relationship between research administration and fundraising is essential for the research enterprise.”
Conflict between research administration and fundraising (often housed in advancement or development offices) frequently stems from misaligned goals, communication gaps, and unclear institutional boundaries. Research administration operates within a framework of strict compliance rules, federal regulations, and specific deliverables, while advancement is often driven by relationship-based fundraising goals and flexible gift arrangements. These fundamental differences are further complicated by divergent success metrics, such as dollars raised through philanthropic gifts versus competitive research proposals awarded. When coordination between these offices is lacking, these misalignments can deepen, leading to missed opportunities, inefficiencies, or even conflicts.
This notable practice is a powerful example of cross-functional collaboration that is often undervalued. Advancement and research offices may operate in silos, but aligning these functions can unlock new funding opportunities, particularly in foundation relations, donor-driven research, and strategic partnerships. This work highlights a practice that fosters shared goals and mutual support.
Takeaway question: How well do our advancement and research offices collaborate? Are there mechanisms to foster joint planning and communication?
The strength of a research enterprise is not only built on formal policies and standard practices; it is also shaped by the less visible, highly intentional efforts that reflect institutional values and culture. These underrecognized practices, such as proactively supporting faculty, aligning research growth with strategic planning, and fostering collaboration across traditionally siloed units, may not always make headlines, but they play a crucial role in creating thriving, resilient research environments.
As we continue to engage in peer review and professional dialogue, we must look beyond the obvious and ask ourselves: What quiet strengths are already shaping success in our institutions? What might we learn and adopt by paying closer attention to the practices that don’t always get recognition?
These hidden strengths deserve to be seen, shared, and celebrated. N

Samantha Westcott is a member of the NCURA Select Committee on Peer Programs/Peer Review. She is also a peer reviewer. Sam has more than 30 years of experience in research administration. She currently serves as the Assistant Director for Sponsored Programs Accounting at the University of Wisconsin-Milwaukee. She can be reached at westcots@uwm.edu.
Whether you work at a research institution or a predominantly undergraduate institution, the importance of providing quality services to your faculty in support of their research and scholarship is undeniable. NCURA offers a number of programs to assist your research administration operations and to ensure a high-quality infrastructure that supports your faculty and protects the institution.
Please contact NCURA Peer Programs: NCURA Peer Advisory Services and NCURA Peer Review Program at peerreview@ncura.edu
By Charles Barnett, Youyou Cheng, and Michael Jarosz
Yale University strives to be a pioneer in Artificial Intelligence (AI), embracing AI tools such as Large Language Models (LLMs) to enhance learning, innovation, and productivity. Last year, Yale announced a $150 million commitment to support faculty, students, and staff as they engage with AI. In addition to significant investments to support research, Yale also developed the Clarity Platform, which offers all members of the Yale community secure access to LLMs from organizations such as OpenAI and Anthropic.
In addition to utilizing AI to support research and education, we are also embracing AI in administrative operations. For example, in our Finance function, we are deploying AI to enhance compliance, boost efficiency, improve customer service, navigate complexity, and support decision-making. One example is our Finance Chatbot – the product of strong partnerships between our Finance and IT functions and early adopters in several units such as Yale School of Medicine (YSM) and Yale Law School.
Accessible to all members of the Yale community, the Finance Chatbot leverages AI to quickly and accurately locate the answer to user questions from among thousands of pages of information such as policies, procedures, forms, and FAQ documents. As one Operations Manager noted, “The Finance Chatbot makes finding policy answers a breeze!... I can say this functionality has drastically reduced the time it takes to get accurate answers to everyday questions.”
“As we continue to innovate and expand our AI initiatives, we look forward to unlocking further benefits of AI in Finance, Research Administration, and other functions to further enhance customer service, effectiveness, and efficiency.”
When answering user questions, the Finance Chatbot provides targeted citations, links to relevant resources, and helpful contact information. These features have made the Finance Chatbot a helpful resource for recent Yale hires and long-tenured staff alike.
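Under the hood, such a chatbot typically pairs retrieval over the document corpus with an LLM that drafts the answer. The toy sketch below shows only the retrieval-and-cite step, using naive keyword overlap; it illustrates the general pattern and is an assumption on our part, not Yale's actual Clarity or Finance Chatbot implementation (the sources and passages are invented):

```python
# Illustrative sketch of the retrieval step behind a policy chatbot: rank
# passages by keyword overlap with the question, then hand the top matches
# (with their citations) to an LLM. Toy stand-in, not a real implementation.

def retrieve(question: str, passages: list[dict], top_k: int = 2) -> list[dict]:
    """Return the top_k passages sharing the most words with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_terms & set(p["text"].lower().split())),
                    reverse=True)
    return scored[:top_k]

passages = [
    {"source": "Policy 3301",    "text": "travel expenses require receipts over 75 dollars"},
    {"source": "Procedure 1101", "text": "journal entries must post by the monthly close"},
    {"source": "FAQ: P-Card",    "text": "p-card purchases over 500 dollars need approval"},
]
for hit in retrieve("what receipts do I need for travel expenses", passages, top_k=1):
    print(f"[{hit['source']}] {hit['text']}")
```

Production systems replace keyword overlap with semantic embeddings, but the cite-your-source structure is the same.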
To identify opportunities to improve the Finance Chatbot and validate enhancements, we leveraged three kinds of testing:
1. Central function subject matter expert testing (e.g., Sponsored Projects Finance Administration team)
2. Early adopter testing
3. AI automated testing
Implementation of artificial intelligence (AI) has the potential to advance and improve mechanisms that protect an institution, its researchers, and their research during a time of heightened scrutiny from federal agencies. However, AI carries the risk of inaccurate, misleading, or biased responses. Successful, risk-based implementation relies on balancing efficiency gains against on-the-ground realities. Here are several strategies for assessing what AI can do and the risks involved in delegating work to it.
Select a Single, Clear Goal
Identify a targeted improvement to keep the effort on track, and assess which tools are available for tasks involving sensitive data.
Design a Prompt with Transparent Reasoning
AI assistants (such as ChatGPT, Gemini, and Copilot) require a quality prompt to produce a quality output. Provide the AI assistant an example of the desired result to receive a higher-quality output. Continuously evaluate, adapt, and update prompts to improve responses and reduce errors. Start a new chat for each review, because multiple reviews in the same chat could lead to false or confused determinations.
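For teams that script their reviews, this advice can be sketched in code. The helper below is hypothetical; the system/user/assistant message schema is the common convention across chat-assistant APIs rather than any one vendor's. It builds a fresh few-shot prompt for every item, mirroring the guidance to provide an example of the desired result and to start a new chat per review:

```python
def build_review_messages(task_description: str,
                          example_input: str,
                          example_output: str,
                          item_to_review: str) -> list[dict]:
    """Assemble a fresh, self-contained message list for one review.

    Rebuilding the list from scratch for every item mirrors the advice
    to start a new chat per review, so earlier determinations cannot
    bleed into later ones.
    """
    return [
        {"role": "system", "content": task_description},
        # One worked example of the desired result ("few-shot" prompting).
        {"role": "user", "content": example_input},
        {"role": "assistant", "content": example_output},
        # The actual item, with no history from previous reviews.
        {"role": "user", "content": item_to_review},
    ]
```

The returned list can be passed to whichever assistant API the institution has approved; because it is rebuilt per item, no prior determination leaks into the next one.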
Keep Human Oversight Central
Use a pilot period in which the team submits the prompt to the AI assistant and conducts the task as they usually would. This allows for the comparison of conclusions. Adapt the prompt if flawed reasoning is identified. AI will expedite tasks in the long term, but initial implementation will require double the work.
Assess Risk and Benefits
The benefit of using AI depends on an institution’s resources and risk appetite. AI can help stretch staff capacity by reducing staff time spent on tasks. Start with small-scale pilots. Develop policies. Monitor AI performance and validate outputs. Utilize unbiased data that does not suggest an answer.
Think Ahead
We are still in the early days of integrating AI into our processes. With more testing and refinement, AI output can be received with less skepticism, striking a balance between efficiency and the increased compliance scrutiny that comes with new and evolving regulations. N

Leah Gessel is Conflict of Interest and Conflict of Commitment Administrator at the University of Oregon. With a professional focus on artificial intelligence, compliance, ethics, conflicts of interest, and research integrity, Leah supports the development and enforcement of policies that uphold transparency and accountability. She works with stakeholders across campus to ensure alignment between personal and professional responsibilities within the institution. Leah can be reached at lgessel@uoregon.edu.
AI automated testing entails posing hundreds of common questions to the Finance Chatbot such as “How can I obtain gift cards to remunerate study subject participants?” followed by four multiple choice answers – including the correct answer (“Create a Spend Authorization”) and three distractor, incorrect answers (e.g., “Purchase them with your Pcard”). The Finance Chatbot’s performance can then be evaluated by a different AI, which calculates how many questions the Finance Chatbot answered correctly relative to the benchmark’s reference answers.
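In rough outline, the scoring loop behind that kind of automated testing might look like the sketch below. The names are illustrative: `ask` stands in for whatever function submits a question to the chatbot, and in practice a second AI can map the chatbot's free-text reply onto one of the answer choices before comparison:

```python
from typing import Callable

def score_chatbot(benchmark: list[dict], ask: Callable[[str], str]) -> float:
    """Run a multiple-choice benchmark against a chatbot.

    Each benchmark item holds a question, the answer choices, and the
    reference (correct) choice. `ask` submits the formatted question to
    the chatbot and returns the choice it selected; here we assume the
    free-text reply has already been mapped onto one of the choices.
    """
    correct = 0
    for item in benchmark:
        # Present the question followed by its multiple-choice options.
        prompt = item["question"] + "\n" + "\n".join(item["choices"])
        if ask(prompt) == item["reference"]:
            correct += 1
    return correct / len(benchmark)
```

Swapping in a stub for `ask` makes the harness itself testable without touching a live chatbot.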
In addition to helping users quickly navigate policies and procedures, the Finance Chatbot has yielded several additional benefits such as:
• Platform Enhancements: Insights gained about chatbot infrastructure and design have benefited Yale’s broader chatbot infrastructure
• Content Improvements: When the Finance Chatbot struggles to answer a question, it may signal an opportunity to improve the clarity of the source content
• AI Literacy: Through numerous information sessions and demonstrations, we have used the Finance Chatbot as an opportunity to strengthen understanding of, and comfort with, Generative AI tools among members of the Yale community
In the future, we plan to further refine the Finance Chatbot by incorporating additional features such as user profiles to facilitate more tailored responses. The Finance Chatbot project has also surfaced several promising, related AI use cases, such as using AI to review cost transfers, accelerate policy drafting and review, and identify opportunities for further policy refinements and training based on frequently asked questions.
Conclusion
Our experience with the Finance Chatbot and other early AI deployments at Yale has highlighted AI’s potential to revolutionize administrative operations. Building on these successes, we are excited about the future possibilities AI offers across various functions at Yale. As we continue to innovate and expand our AI initiatives, we look forward to unlocking further benefits of AI in Finance, Research Administration, and other functions to further enhance customer service, effectiveness, and efficiency. N



Charles Barnett serves as Continuous Improvement and Transformation Director in Yale University’s Controller’s Office, where he leads initiatives to leverage AI to improve processes and enhance decision-making. He has delivered numerous educational sessions on AI for staff at Yale and peer institutions. Charles earned his BA and MS from Columbia University and MBA from UNC Chapel Hill, and he holds the Chartered Financial Analyst designation. He can be reached at charles.barnett@yale.edu.
Youyou Cheng is the Deputy Director of Sponsored Projects Financial Administration at Yale University. She has 20 years of experience in pre- and post-award management, having held leadership roles in departmental, college, and central research offices across both private and public higher education institutions. She can be reached at youyou.cheng@yale.edu.
Michael Jarosz is Director of University Policy at Yale, overseeing administrative policy, financial compliance, and strategic initiatives, including AI integration. With a background in law and higher education, he brings expertise in governance, risk, and operations. He holds a JD, MSA, and BA from SUNY Buffalo, Illinois, and Boston College, respectively. He can be reached at michael.jarosz@yale.edu.
Available for immediate download through NCURA’s Online Learning Management System.



UNDERSTANDING AND MANAGING SPONSORED PROGRAM ADMINISTRATION AT PREDOMINANTLY UNDERGRADUATE INSTITUTIONS
This PDF resource introduces the framework, key concepts, and practices in effectively managing sponsored programs at Predominantly Undergraduate Institutions (PUIs). Topics include:
• Organizational Models and Structures
• Roles and Responsibilities
• Regulatory Compliance Requirements
• Pre-Award Services
• Post-Award Support Services
For details and to purchase visit https://onlinelearning.ncura.edu/read-and-explore
This publication introduces foundational concepts and best practices in sponsored project budget development. Topics include:
• Sponsor Requirements
• Allowable Direct and Indirect Costs on Federal Awards
• Cost Sharing
• Budget Justification
• Salary Limitations
• Types of Budgets
• Requirements and Nuances for Federal and Non-Federal Awards
• Sample Calculations and Explanations
GET IT HERE! https://onlinelearning.ncura.edu/read-and-explore


Take 5 minutes a day for your professional development – be informed and inspired with new articles posted daily to the NCURA App!
Reading is, and always has been, the habit of the highly successful.


Available for iOS and Android devices! Search “NCURA” in your app store or use the QR code to take you directly there!


By Kathryn Cavanaugh
Research compliance is critical to maintaining ethical standards and adhering to federal requirements. The need to balance rigorous compliance with operational efficiency, however, has long been a challenge. To avoid risk, institutions sometimes impose stringent internal policies that exceed regulatory requirements. This excessive caution encumbers their processes with unnecessary self-imposed administrative burden.
Administrative burden, for this article, refers to the internal processes and tasks an institution requires to manage, oversee, and support the operations of the research program. To reduce administrative burden when planning such processes, it is essential to consider:
1. What is required by federal regulations?
2. How can operations be streamlined to meet such requirements?
3. What preventive measures may be proactively employed to mitigate the occurrence of noncompliance?
Artfully balancing the reduction of unnecessary self-imposed burden while focusing on proactive compliance requires a strategic approach that aligns regulatory obligations with institutional goals. Figure 1, adapted from Haywood and Greene (2008) and expanded to address all areas of research compliance, illustrates the strategic decision-making process involved in assessing administrative burden. If an activity is not required by federal regulations, it is critical to evaluate whether the benefit of keeping the process outweighs the burden of completing the task. For example, enhanced safety for human subjects, research animals, or research personnel might provide a benefit that outweighs the administrative cost of implementation. On the other hand, employing practices that exceed regulatory requirements without any discernible positive impact on safety or welfare could be a key indicator that the process represents excessive administrative burden.
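For readers who think in pseudocode, the decision process Figure 1 describes reduces to a small function. The numeric scores below are placeholders for the qualitative judgment an institution would actually apply:

```python
def assess_activity(required_by_regulation: bool,
                    benefit: float,
                    burden: float) -> str:
    """Sketch of the decision logic described in the text.

    Returns a recommendation for a single compliance activity:
    regulatory requirements are always kept; everything else is kept
    only when its benefit (e.g., enhanced safety) outweighs its
    administrative burden.
    """
    if required_by_regulation:
        return "keep (regulatory requirement)"
    if benefit > burden:
        return "keep (benefit outweighs burden)"
    return "candidate for elimination (self-imposed burden)"
```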

Examples of such excessive self-imposed burden include:
• Requiring your research compliance committee to complete annual reviews of projects that do not require an annual assessment. Requiring the Institutional Review Board (IRB) to complete annual reviews for exempt or expedited human subjects studies, or requiring the Institutional Animal Care and Use Committee (IACUC) to complete annual reviews for any research activity regardless of animal use, increases burden.
• Applying federal requirements to activities that do not fall under the scope of the federal regulation. For example, “checking the box” on a Federalwide Assurance form with the Health and Human Services Office for Human Research Protections, which applies 45 CFR 46 (Protection of Human Subjects, 2018) requirements to human subjects research not subject to the Common Rule (Kelly, 2024), or applying the requirements of the Public Health Service Policy on Humane Care and Use of Laboratory Animals (U.S. Department of Health and Human Services/National Institutes of Health, 2015), including reporting, to all animal research activities is burdensome.
• Taking a blanket approach to post-approval monitoring and applying the same monitoring method and frequency to all projects regardless of their activity status. To maximize post-approval monitoring efforts while minimizing burden, institutions could adopt a risk-based approach and perform targeted assessments of active research studies, prioritizing those that represent high-risk research activities (Pritt and Smith, 2020).
• Failing to implement regulatory flexibilities out of a fear that a more flexible approach will not satisfy regulatory requirements. The ability to use as few as one qualified individual (who need not be an IACUC member or institutional employee) to conduct facility inspections for animals not regulated under the Animal Welfare Act (Office of the Director, National Institutes of Health, 2024; U.S. Department of Health and Human Services/National Institutes of Health, 2015 [fn 8]) is a commonly underutilized flexibility.
“Artfully balancing the reduction of unnecessary self-imposed burden while focusing on proactive compliance requires a strategic approach that aligns regulatory obligations with institutional goals.”
Burden may also arise when instituting proactive operational practices intended to prevent the occurrence of noncompliance. Examples of such preventive measures include:
• Maintaining metrics on the types of noncompliance and/or inspection deficiencies to identify trending issues that may be mitigated through improved training or heightened post-approval monitoring.
• Engaging in succession planning and standardized compliance onboarding of committee members to maintain continuity of committee knowledge and operations.
• Training new Principal Investigators on how to compose their protocols according to committee expectations and educating them on how to avoid common mistakes that may lead to noncompliance.
• Maintaining consistent communication channels with researchers, such as a department website, a listserv for mass communication, or regular office hours for targeted assistance.
When evaluating such risk-prevention practices, it is essential to consider whether a given process will result in a decrease in noncompliance over time and thus a net reduction in programmatic burden.
Skillfully assessing administrative burden and streamlining operations in research compliance is a necessary step toward ensuring that research institutions remain agile, efficient, and focused on advancing their research mission. By reevaluating overly stringent policies, minimizing redundant reviews, and adopting a risk-based approach to decision-making, institutions can streamline compliance workflows without compromising ethical standards. Ultimately, a more balanced and strategic approach to compliance will satisfy regulatory obligations while simultaneously building a proactive culture of compliance. N
References
Haywood, J. R., & Greene, M. (2008). Avoiding an overzealous approach: A perspective on regulatory burden. ILAR Journal, 49(4), 426–434. https://doi.org/10.1093/ilar.49.4.426
Kelly, C. M. (2024). Reporting to OHRP: The implications of “checking the box” on your Federalwide Assurance. NCURA Magazine, 56(2), 66. www.ncura.edu/Portals/0/Docs/Magazine/2024/NCURAMagazine_MarApr2024.pdf
Office of the Director, National Institutes of Health. (2024). NOT-OD-24-075: Guidance on flexibilities for conducting semiannual inspections of animal facilities. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-24-075.html
Pritt, S. L., & Smith, T. M. (2020). Institutional Animal Care and Use Committee postapproval monitoring programs: A proposed comprehensive classification scheme. Journal of the American Association for Laboratory Animal Science, 59(2), 127–131. https://doi.org/10.30802/AALAS-JAALAS-19-000096
Protection of Human Subjects, 45 C.F.R. § 46 (2018). www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html
U.S. Department of Health and Human Services/National Institutes of Health. (2015). Public Health Service Policy on Humane Care and Use of Laboratory Animals. https://olaw.nih.gov/policies-laws/phs-policy.htm

Kathryn Cavanaugh is a certified professional in IACUC administration (CPIA) with a B.S. in Animal Science from UC Davis. She has over a decade of experience in research compliance and currently serves as the Associate Director for Research Compliance Training & Development at the Texas A&M University System. Kathryn has presented at national conferences including PRIM&R, AALAS, NCURA, and SRAI, and has published multiple articles on IACUC operations and compliance. Kathryn can be reached at kcavanaugh@tamu.edu.
By Milena Arsenijevic
Data strategy is crucial to enable digital and AI-driven transformation, particularly as AI models require large data sets to train on. However, research management functions often struggle to build data foundations due to their reliance on documents and manual processes. To enable AI transformation, research organisations need to critically examine their digital maturity, starting with data. If done well, the opportunities of an integrated data and AI strategy are significant, with the potential to uplift research management as a new, untapped source of business intelligence and competitive advantage for research organisations.
Research managers are very familiar with the challenges of streamlining information in complex research organisations. Data is often siloed between business units, stored in Excel spreadsheets and PDF documents, and not effectively linked in a relational database. Indeed, a research manager’s skill lies in creating order from this “administrative chaos,” working across teams, funders, and research fields to effectively deliver projects despite the barriers to digital processes and data sharing. Yet the longer research management functions work in this way, the more delayed the benefits will be from AI and digital transformation.
Consider the optimal state of a digitally mature research management function: administration and file-keeping of research projects are no longer overseen by one or two staff members, but rather held in a digital record that can be collaborated on, stored, transmitted, and analysed by professional and academic users alike at any stage of the project. Research management documents such as grant agreements, funding proposals, progress reports, and ethics approvals can be converted into machine-readable, standardised, and streamlined artefacts. The data held within these artefacts can be used to build visual data dashboards with real-time updates, inform evidence-based KPIs, and develop AI bot assistants that address routine queries. Teams across the university would be able to access and view research management information, making visible the university’s research impact and needs.
Research managers can look to leading examples in student education and teaching as an example of AI transformation. Australia's Macquarie University developed Virtual Peer, an AI chatbot that provides 24/7 student support using a knowledge base of lecture materials and university policies (Microsoft, 2025). During testing, usage spiked before exams, particularly on weekends, revealing that students need help most outside normal business hours when traditional support isn't available. A similar AI assistant could support service-focused research management functions like proposal submissions, finance, or contract queries.
There are, of course, some risks and considerations when embarking on a data strategy for AI-enabled research management. Caution should be applied to historical research funding data, which may reveal biases in successful funding awards based on gender, field, and career stage rather than reflecting contemporary equity guidelines. Changes to professional staff responsibilities could generate concerns about being “replaceable” (Nordling, 2023) – a real concern, which should be balanced by organisations’ need to maintain sustainable professional staff workforce levels proportionate to academic staff. Values alignment should occur between an organisation’s digital strategy and research culture strategy, wherein a shared vision of equity and support can guide workforce transition and maintain trustworthy professional-academic relationships. Indeed, partnership, trust, and collaboration are core skills required to connect the university’s diverse activities, and they cannot easily be replaced by digital technology.
“To enable AI transformation, research organisations need to critically examine their digital maturity, starting with data.”
Data strategy is more than technology. The pathway to an AI-enabled research management profession integrates data development alongside people, technology, and research culture. To paraphrase Prof. Alan Brown in his article “The 5 patterns of digitization in large established organizations” (Brown, 2022), by converting paper documents into digital assets and adopting data-driven processes, we will begin to shift the view of research management as a digital product. People will need to be supported as the skill needs and responsibilities of research management change and processes are automated. As digital maturity and AI integration advance, we can start to shift the Value Paradigm of research management – allowing research managers to define what our profession looks like, and our value add, in a truly digital, AI-supported research experience. N
References
Brown, A. (2022). The 5 patterns of digitization in large established organizations. https://digit.ac.uk/opinion/the-5-patterns-of-digitization-in-large-established-organizations
Microsoft. (2025, March 24). Macquarie University students’ exam scores up by nearly 10 per cent thanks to new AI-powered chatbot. Microsoft News. https://news.microsoft.com/source/asia/2025/03/24/macquarie-university-students-exam-scores-up-by-nearly-10-per-cent-thanks-to-new-ai-powered-chatbot
Nordling, L. (2023). How Research Managers are using AI to get ahead. Nature Index. www.nature.com/articles/d41586-023-04160-6

Milena Arsenijevic is a Senior Manager within the Major Initiatives team of the University of Queensland Research Office. With a background in public policy, innovation and communications, Milena identifies and builds collaborative research programs that seek to address national and global missions. Milena holds a Bachelor of Arts from the University of Queensland and a Master of Public Affairs from Sciences Po Paris. She is currently pursuing studies in Business Analytics and Machine Learning. She can be reached at m.arsenijevic@uq.edu.au.
By Abby Guillory, Lee Broxton, and Tolise Dailey
Have you ever had a great training idea but struggled to plan the topics or draw up an agenda due to limited time? Have you struggled to craft a compelling description for a training session that will attract participants rather than deter them? We all have. Whether you have a license to Microsoft’s Copilot or you’re using OpenAI’s ChatGPT, many types of artificial intelligence (AI) can save time and effort in your training development. It can’t replace our expertise in the nuances of research administration, but it can provide a great starting point for training preparation.
Benefits
With limited bandwidth available to consider the numerous training topics and ideas we all have, AI platforms can provide a foundation, helping us to avoid starting from scratch. The platforms can search existing training within your institution (if you have a university license) or search websites and training publicly available on the internet to help in the development (or redevelopment) of concepts to cover during a training session.
The time it takes to come up with an engaging title, description, and outline can be lengthy. With AI platforms, we can adjust the items we’ve written to make them sound more professional, more friendly, or more engaging. AI can also develop these pieces for us and then allow for necessary revisions and adjustments to fit our unique university or the specific topics we want to cover with our intended audience. AI can also modify existing content to meet the needs of a new audience. This can be completed with most AI platforms, such as ChatGPT or Microsoft Copilot, simply by providing the content you’ve already prepared and prompting AI to redevelop it for a new audience.
The benefit of using AI isn’t limited to high-level outlines; it can also provide us with additional details to cover. One beneficial feature is the ability to generate speaker notes, which can serve as a script or talking points for trainers, making it easier to deliver a smooth and effective presentation and ensuring consistency and clarity in how our content is delivered. Microsoft Copilot does an excellent job of preparing speaker notes for you, but other readily available AI platforms can also provide this assistance.
The days of bland, information-packed training sessions are over, and AI is helping to lead that transformation. Today’s learners typically expect more than just slides and lectures; they want to be engaged, challenged, and involved. AI tools can help meet those expectations by generating interactive elements tailored to your training content, thereby enhancing the learning experience. By integrating AI-generated engagement, we can shift from passive information delivery to active learning experiences that resonate with our audience. To complete this type of activity, provide the content you or the AI has prepared on a topic and request activities, such as quizzes or simple games, to ensure understanding with the audience. Some AI platforms, such as Microsoft Copilot, may ask if you want to include these automatically.
“While leveraging AI in training development can be beneficial for research administrators, as with all AI, accuracy is always a concern.”
Selecting relevant graphics and images to include in a presentation can be time consuming. Available AI platforms can provide descriptions of what a good image might look like for a slide, or, with the right prompt, they can even create the entire slide deck with images for you. Presentation generators can take a process that once may have consumed hours of working time and complete the task in mere seconds. This allows us to focus more on the content learners need rather than on the tedious nature of administrative tasks. There are several AI platforms, available in both free and paid versions, to assist in the development of slides; PopAI, Slidesgo, and even Canva can all help.
While leveraging AI in training development can be beneficial for research administrators, as with all AI, accuracy is always a concern. The assistance does not erase the need for our expertise and review. After developing the training, ensure you review what has been provided, looking for gaps in the concepts you plan to cover, examining the use of jargon, and considering any updates that may be needed due to new or upcoming regulations or policy changes.
Have you tried prompt after prompt with little luck? There’s an art to creating prompts that are useful to AI, and sometimes what you’ve requested the AI platform to develop misses the mark. AI may have a limited understanding of a concept or contextualize it in a way that differs from the intended meaning. When this happens, it can take longer to correct the prompt than it would to revise the information provided or develop your own. In this situation, consider adjusting the information you’ve been given to meet your needs. A longer-term solution is to learn how prompts work and how to ensure the AI platform meets your needs. Consider completing training through LinkedIn Learning or other websites to help.
The creation of slides can be a big timesaver, but before stepping out to do that presentation in front of a group of eager faculty or administrators, be sure the slides are accurate and reflective of the topics you plan to discuss. Review the images to ensure they are relevant to the topic and aren’t distracting for the audience or improperly placed. It’s also worth noting that required templates may impact the creation of slides.
For example, if your university requires branded slides or has specific branding guidelines that you must adhere to, consider these guidelines when reviewing your slide deck. AI can easily miss some of the standards.
Incorporating AI into our training development is all about enhancing our expertise, not replacing it, and it’s a win-win for our profession and your university. From saving time on outline and slide development to generating engaging activities and refining existing content, using AI tools can streamline the process, further free up valuable bandwidth, and spark our creativity. However, thoughtful consideration and review remain essential to ensure that we provide the most accurate, relevant, and aligned training possible to our stakeholders. With the appropriate balance, AI can be a powerful partner in developing high-impact, well-crafted training experiences. N

Abby Guillory, MLIS, CRA, is the Assistant Vice President for Research at the University of Tulsa. Abby, an NCURA Distinguished Educator, has over 20 years of experience in grant administration across higher education, non-profit, and for-profit organizations. She serves as a NCURA Fundamentals Traveling Workshop Faculty Member, author of numerous NCURA articles and publications, and committee member and presenter, both regionally and nationally. She can be reached at Abby-Guillory@UTulsa.edu.

Lee Broxton, M.Ed., CRA, is the Manager of Research Education at the Georgia Institute of Technology. In this role, he leads the development and management of Georgia Tech’s comprehensive research education curriculum, which spans the full lifecycle of sponsored funding. With over 11 years of experience in research administration and more than 18 years in education, he is passionate about empowering research administrators, researchers, and students through innovative education while fostering a culture of compliance and excellence. Lee can be reached at lee.broxton@osp.gatech.edu.

Tolise Dailey, CRA, is the Training and Education Development Associate at Georgetown University. She received the National Council of University Research Administrators (NCURA) Julia Jacobsen Distinguished Service Award and the Distinguished Educator Award. She is an esteemed co-editor of NCURA Magazine and a traveling workshop faculty member for NCURA Level II: Sponsored Projects Administration. She can be reached at tcm9@georgetown.edu.

The requirements for documenting compensation charges to federal awards are complicated - we have a resource to help!
This resource provides the framework for understanding the federal requirements for documenting compensation charges to federal awards and the complexities in meeting the requirements, as well as the implications and potential repercussions if not met.
Topics include:
• Requirements of supporting salary charges
• Determining what constitutes compensation
• Documentation and Internal Controls
• Issues and Risks
• Common Audit Findings
• Other considerations
• Example of internal control framework for compensation compliance

By Michele Kijeski
Before the Bots: A Day in the Life
They say no one becomes a Research Administrator on purpose. Most of us fall into it by accident—and for reasons that defy logic, many of us choose to stay. We love what we do, even though our days are filled with conflicting priorities, impossible demands, and deadlines that would require a time machine to meet.
Take a typical day in the life of a departmental research administrator (DRA). She starts her morning hunting down unresponsive PIs, who are more elusive than Bigfoot. Her lunch break? Spent chasing a $17.38 budget discrepancy. And just as the mid-afternoon slump hits, a postdoc appears with five years’ worth of international travel receipts for a grant that closes tomorrow.
But all of this must be put aside when she gets an email from the Central Office—the one that starts with “Quick question,” and ends in three days of data gathering, reviewing terms, and filling out forms.
From the Central Office, With Love
The relationship between the DRA and the ever-watchful Central Office is… complicated. It’s symbiotic—neither can function without the other—and most days, they work well together toward a common goal. Occasionally, tensions can rise in the relationship between central offices and departments, with departments viewing central offices as inflexible and Central Office staff viewing departmental research administrators as less experienced or knowledgeable. When Central sends over its latest honey-do list, the department administrator rolls up her sleeves and gets to work, wondering how she can make some of her daily tasks less tedious. Enter: the magical, mysterious world of Artificial Intelligence.
With the right tools in place, our overworked DRA can offload some of her more time-consuming tasks onto AI-assisted programs like Airtable, Ironclad, and ChatGPT Operator, which essentially act as electronic minions. How exactly? Let’s take a look.
Airtable: Not just a spreadsheet, but a smart, organized brain for RA data. How smart? Okay, maybe not Mensa-certified, but definitely sharp enough to recognize relationships between tables, link records like a pro, and fetch related info without requiring an expedition down the proverbial rabbit hole. Let’s say Central is asking our DRA for Dr. Elsewhere’s contact details, proposal status, and compliance info, which would typically require her to go spelunking through five folders and three databases. With Airtable, she can just ask in everyday terms—no jargon necessary. With a little automation magic (and zero coding required), Airtable can pull it all together and present it like a well-behaved intern: fast, tidy, and no sneaking off for snack breaks.
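For offices that do want to go beyond the no-code experience, Airtable also exposes a documented REST API. A minimal sketch follows; the base ID, table name, and “PI Name” field are invented for illustration, so substitute your base’s actual schema:

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.airtable.com/v0"

def build_url(base_id: str, table: str, pi_name: str) -> str:
    """Build the Airtable REST query for one PI's records.

    filterByFormula is Airtable's server-side query language; the
    field name "PI Name" is an invented example.
    """
    query = urllib.parse.urlencode(
        {"filterByFormula": f"{{PI Name}}='{pi_name}'"})
    return f"{API_ROOT}/{base_id}/{table}?{query}"

def extract_fields(payload: dict) -> list[dict]:
    """Flatten an Airtable response to just the field dictionaries."""
    return [r["fields"] for r in payload["records"]]

def fetch_pi_records(token: str, base_id: str, table: str,
                     pi_name: str) -> list[dict]:
    """Fetch every record for one PI, authenticating with a personal token."""
    req = urllib.request.Request(
        build_url(base_id, table, pi_name),
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return extract_fields(json.load(resp))
```

In practice the DRA would never see this layer; it is the kind of plumbing an IT partner might wire up once so the conversational front end stays jargon-free.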
Ironclad: Turns legalese into legal ease.
When Central Contracts asks her to review reporting or work product clauses, Ironclad can remember previous decisions made about similar language. Its AI can recognize her institution’s preferred terms and automatically flag them as "approved," so she may not even need to review them again. That can mean fewer emails back and forth between Central and the department during negotiations.
And when a contract reads like it's written in Ancient Egyptian, Ironclad’s AI can translate the legalese into plain English, so she has a fighting chance of understanding what she’s being asked to approve.
Best of all, when Dr. Smith starts asking about the status of that contract negotiation, Ironclad can give the DRA real-time updates directly from the system without having to wait for a response from Central.
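None of this requires knowing Ironclad's internals. The core idea (matching incoming clause text against a library of previously approved language) can be sketched in a few lines of Python. The clauses and the 0.9 similarity threshold below are made-up examples, and `difflib`'s `SequenceMatcher` stands in for whatever matching a commercial tool actually uses.

```python
# Toy illustration of clause triage: compare an incoming clause against
# language the office has already approved, and fast-track close matches.
from difflib import SequenceMatcher

# Hypothetical library of previously approved clause language.
APPROVED_CLAUSES = [
    "Recipient shall submit a final technical report within 90 days of the project end date.",
    "All work product developed under this agreement shall be jointly owned by the parties.",
]

def triage_clause(clause: str, threshold: float = 0.9) -> str:
    """Return 'approved' if the clause closely matches prior language,
    otherwise route it to a human for review."""
    for known in APPROVED_CLAUSES:
        ratio = SequenceMatcher(None, clause.lower(), known.lower()).ratio()
        if ratio >= threshold:
            return "approved"
    return "needs review"
```

A clause the office has seen before comes back "approved" without another round of emails; anything novel gets flagged for an actual human read.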
ChatGPT Operator: A tireless (and slightly nerdy) digital assistant
Think of Operator as a digital sidekick (or a whole team of them, depending on how many browser tabs the DRA has open). It doesn’t fetch coffee—yet— but it can help prep her requests in online institutional systems like Cayuse, AURA, or even Office 365. While it won’t hit “Submit” for her (unless she’s set up deeper automation), it’ll handle the grunt work like a champ—with a bit of supervision, of course.
If it gets confused, it’ll politely tug on her virtual sleeve for help. She can toss it her 192-page user manual (just no more than 300 pages—it has limits), and it’ll actually read the thing to figure out how to help. It even plays nicely with online platforms like Research.gov, which means it can kick off that NCE request and leave it neatly queued up for the Central Office to finish.
With new programs in hand, the DRA faces one final obstacle. Despite the prevailing concern (or promise) that AI will someday take over her job entirely, there is a lot of elbow grease ahead of her to get these helpers working.
But as DRAs go, she’s a typical example of the breed, meaning she eats hard work for breakfast. And lunch. And sometimes dinner. She puts in the time necessary to accomplish her goal. Spreadsheets are converted to databases. Preferred terms are drafted and saved. System integration needs are identified and guidelines refined. Nights? Weekends? She’s heard of them—rumors mostly.
The effect is not immediate, but little by little, she sees a shift. Days’ worth of data gathering is replaced by a single fetch request. Encounters with contract terms become rarer than pictures of the Loch Ness monster. And now she has a loyal crew of electronic lackeys handling the administrative slog of portal form-filling.
With the Central Office’s needs now being met by her army of bots, the DRA can get back to managing her own departmental chaos. And though it’s unlikely to ever happen, she dreams of a day when AI can handle the majority of her workload, while she sips mai tais on the beach and texts in occasional instructions.
Until then, she’ll settle for sipping lukewarm coffee between AI-generated reports—and savoring the small wins, one automated task at a time. N

Michele Y. Kijeski is a Post-Award Research Specialist at the University of Chicago and a proud graduate of the University of Alabama. Her day-to-day includes award negotiation and setup, award closeout, and a curated selection of post-award adventures in between. Known as her team’s checklist guru and resident foodie, Michele dreams of one day launching a serialized saga featuring the fearless exploits of research administration’s unsung hero: Rita Reggs. She can be reached at mykijeski@uchicago.edu.

By Doireann Wallace
In Mary Doria Russell’s 1996 novel “The Sparrow,” AI experts called ‘vultures’ interview workers to break down their work processes and methods to be computerised, at which point the workers can be fired. Early in the book, an astronomer, Jimmy, asks a Jesuit missionary who has undergone the procedure to capture his language-learning methods whether he should cooperate with the vulture his employers have hired. “Hold out for a while,” the priest advises, “Until the vulture does you, you still have some leverage. [...] Once they’ve got you stored, they don’t need you.” Jimmy’s colleague urges him to resist his natural habit of obedience, since his cooperation would make it harder for everyone else. Still, Jimmy gives in to what he sees as the inevitable.
Nothing more comes of this bit of worldbuilding as the plot moves elsewhere, but I think of it often these days. When my research manager
professional association declares AI to be the key challenge our profession must adapt to. When a speaker at an AI training day waxes enthusiastic about a university that replaced its whole research office with data scientists. When social media ads repeatedly warn me ‘AI will not replace you. A person using AI will.’ Much like the shift in discourse on the climate crisis from ‘how can we stop this catastrophe before it’s too late’ to ‘how can we live with this since we don’t have the will to stop it’, the narrative about AI use is that it is inevitable. Society will change. Work will change. Without our consent. All we can do is learn to live with it. Perhaps fight for scraps as early adopters by positioning ourselves as midwives of a system that may eventually replace most of us.
The analogy is not perfect, of course. Nobody had to sit down and explain how they do their job to train Large Language Models (LLMs) like ChatGPT.
They are trained on everything we’ve already said and done. They’ve already plundered the collective resources of human knowledge, thought and communication, learned our habits of speech and how we structure ideas. Now, without actually knowing any of this, they produce a plausible simulacrum of a well-informed and reasonable human interlocutor. As such, they have the potential to augment, speed up or take over much of the work we do that involves parsing or generating text (or reading and writing as they were once called). But when it comes to adopting these tools in a professional context, I think the analogy does hold. We are needed, for the time being, to cooperate in developing the frameworks within which LLM use becomes standard practice.
“Clearly there are things we do that machines will never do - empathise, understand, connect, mentor, take into account the pressures and insecurities, the social and emotional needs, of the people we support.”
Could GenAI replace my role? Let’s break down what I do. I work in proposal development. My job is mainly to help researchers write more competitive proposals for collaborative funding schemes. I help them identify opportunities that match their research, understand funder requirements and improve how they plan and communicate a research project to align with these.

GenAI tools already exist to do much of this, at least in some form. Researchers can upload funding scheme guides and model grant agreements to some LLMs and query eligibility criteria or eligible costs. They can upload a work programme (in EU collaborative funding schemes these contain dozens of specific call topics) and request matches for their interests. They just need to know how to ask. An LLM can take outline ideas, shape them into well-structured text that complies with a specified format, and improve the clarity and concision of language. It can help brainstorm, suggest methods, activities, metrics to use, partners and stakeholders to engage. Of course their outputs need critical review, but LLMs are already quite good at a lot of this and we’re told they are improving rapidly.

If we accept this generally rosy view of the GenAI trajectory, the greatest barriers to wider, faster adoption in my profession look like trust and institutional inertia. LLMs bypass understanding. They are, researchers recently argued, “bullshit machines,” in that they are designed to produce plausibly truthful statements but are by nature indifferent to the actual truth of these statements (Hicks et al, 2024). So we are right to ask whether we can trust them. To begin with, we need trusted guidance on which tools are reliable and protect privacy and intellectual property, but we also need to test them in our professional contexts to make sure they are at least as reliable as we are in interpreting call requirements, providing guidance or critiquing proposal drafts. That’s where we encounter Jimmy’s dilemma.
In the short term, there are undoubtedly interesting opportunities to cooperate with the new technology, to develop faster and more efficient systems and processes that save us time, even to reflect on and improve our work practices. We can use LLMs ourselves to brainstorm, suggest examples, summarise or
shorten texts. We can become, temporarily, valued intermediaries who know how to ask the right questions in the right form to get useful answers, and we will know, within our own professional domains, whether the answers make sense.
But what happens if we do get to a point where systems work well, can be trusted and have been developed and verified in practice? At what point will GenAI systems have scavenged everything they need to do our jobs better than us? Clearly there are things we do that machines will never do - empathise, understand, connect, mentor, take into account the pressures and insecurities, the social and emotional needs, of the people we support.
Some say this is where we’ll add value in the future. But given the current drive for efficiency, will a point come where these too lose value? After all, caring work is massively undervalued and underpaid in today’s knowledge societies compared to specialist technical expertise.
There’s nothing sacred about the research management profession. It has emerged and adapted in response to changes - including technological changes - in how research is funded and managed, and it should continue to adapt to serve its purpose, which is to provide a support system to enable excellent and impactful research that ideally benefits society. If GenAI tools can give researchers more time to do great research instead of writing proposals for schemes with low success rates, that’s a good reason to embrace them. But there are plenty of reasons to be skeptical right now. Even if serious technical risks like model collapse (LLMs degrading over time as they are trained on LLM outputs) (Shumailov et al, 2024) can be successfully overcome, there are major societal risks ranging from encoding and perpetuating existing bias and stereotypes, to surveillance and privacy concerns, to the massive energy resources needed to train and operate LLMs (Weidinger et al, 2022).
No system should stay unchanged for the sake of it, but no system should change for the sake of it either. When a new world is being built and hyped, we need to ask whose interests are served by this and whose world this is to be? Addressing the techno-solutionist myth that AI development is inherently benign and will inevitably serve the public good, Abeba Birhane notes that while it would be difficult to achieve consensus on what the public might want or need, “it would be even more absurd to assume that government interests or corporate practices align with those of the public, or that these bodies hold useful insight into the public needs and interests.” (Birhane, 2025). Before we consider how to use GenAI as a labour-saving device in our professions, I think we should be asking ourselves more fundamental questions. Is this emerging world shaped by undemocratic corporate and technocratic interests a world we want to live in? And if not, should we cooperate as though this world is inevitable, or should we hold out for a while longer? N
References
Birhane, A. (2025, February 18). Bending the arc of AI towards the public interest. Artificial Intelligence Accountability Lab. https://aial.ie/pages/aiparis/
Hicks, M.T., Humphries, J. & Slater, J. (2024). ChatGPT is bullshit. Ethics Inf Technol 26, 38. https://doi.org/10.1007/s10676-024-09775-5
Russell, M.D. (1996). The Sparrow. Villard.
Shumailov, I., Shumaylov, Z., Zhao, Y. et al. (2024). AI models collapse when trained on recursively generated data. Nature 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y
Weidinger, L., Uesato, J., Rauh, M. et al. (2022). Taxonomy of Risks posed by Language Models. FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 214–229. https://doi.org/10.1145/3531146.3533088

Dr. Doireann Wallace is Senior Interdisciplinary Research Funding Specialist at Trinity College Dublin, where she supports interdisciplinary and transdisciplinary research development and proposal development in Climate and Sustainability areas, primarily for Horizon Europe. She can be reached at wallacdo@tcd.ie.


By Laura Pattillo
Collaborate. It means to work together on an activity with the goal of producing or creating something useful for others. Collaborating means coming together to work for the good of others: what is produced can help people perform their jobs or can benefit the community. Remember the old sayings: “Great minds think alike.” “Birds of a feather flock together.” “The more the merrier.” Collaboration works like that. Great minds think alike, but great minds coming together can mesh their ideas to make both the ideas and the products better.
This is the goal of NCURA’s Collaborate. NCURA’s Collaborate is a community
that is available to all members of NCURA. It is a place for members to come together to ask questions or chat with other members about what is happening in the world of Research Administration (RA). Members can find helpful information on almost all areas of RA. So how does Collaborate work to bring this information and people together? NCURA members and volunteers.
NCURA members and volunteers make up the Collaborate Community. It is because of people that Collaborate exists. Collaborate exists to bring NCURA members together to support and help one another. The community exists for members to work together to bring timely and helpful information to each other. It is great for networking and gaining those contacts that could be helpful in the future.
Collaborate provides a fabulous networking tool through its communities where members can post questions, answer questions, or just start a discussion about what is happening in RA. Communities range in topic from Departmental Research Administration to Subcontracting to even ones for each NCURA region. A new community was established at the beginning of 2025—Changing Federal Landscape. It was started to help all members experiencing the whiplash of the fast and furious changes that started occurring after January 20.
However, it is the volunteers that are Collaborate’s most valuable asset. Within each community, there is a group of volunteers working on a quarterly basis to produce a “deliverable” that can be useful to NCURA members. There are several types of deliverables. Deliverables include discussion on Hot Topics, YouTube Videos, Interviews, Collaborate Conversations, Surveys, and Articles. The goal is to bring NCURA members together and offer valuable timely information to research administrators.
NCURA’s Collaborate is more than a website. It is a community of research administrators working to support each other in this unpredictable time. By putting our heads together, we can solve problems and support one another through the ever-changing world of RA. We encourage you to use Collaborate at any time. It is more than just a useful tool. It is a community working in an ever-changing environment. N

By Mallory Ball and Daniel Gregory
This piece builds on a topic presented at the Association of University Export Control Officers conference held in May 2025. Export control compliance continues to become more important in academic research, further increasing the need for collaborative approaches to closing the compliance gaps that exist between the many siloed operations of universities. We will discuss how strengthening export control compliance processes can only occur through partnerships with stakeholders at your university – because we all need a little help!
The Challenge
A department recently brought in an international researcher without realizing the project involved potentially controlled technology. The individual was granted lab access before any compliance review, potentially resulting in an unlicensed deemed export. With no standardized review process and limited faculty awareness, the issue only surfaced weeks later, forcing a reactive scramble and highlighting the risks of siloed responsibilities.
Universities face growing complexities in export compliance due to siloed responsibilities, a lack of awareness across campus, and ever-evolving regulations. At many institutions, export control functions were initially embedded in legal departments or scattered across various administrative offices (GAO, 2020). An example of this is shown in Figure 1.1.
This diffusion creates gaps in accountability and hinders consistent application of policy. Even with a shift to centralized export control responsibilities, early efforts can still suffer from limited visibility and inconsistent buy-in from stakeholders (UM System ECMP, 2023).
One significant risk is the burden placed on a single staff member or small team to monitor, interpret, and enforce a wide range of federal regulations. Without integrated systems or institutional support, solo staffers can become overwhelmed, leading to delays, missed reviews, or noncompliance. It is unlikely that one person would be able to complete the integration of these regulations into policy and procedure on their own. This is why partnership is key.
Recognizing that compliance cannot be sustained in isolation, adopting a partnership model that leverages existing university offices to embed export control into their everyday operations can help ensure compliance across campus. By working closely with offices like Sponsored Programs, Legal Counsel, Admissions, Global Safety & Travel, Tech Transfer, HR, and IT, you can identify natural points for collaboration and review.
Some partnerships are formal, such as inclusion in the international travel approvals, which includes a mandatory export control screening. This ensures that no travel to sensitive destinations is approved without oversight. Similarly, collaboration with Sponsored Programs integrates
export control into the grants lifecycle by flagging potential concerns during proposal review (GAO, 2020).
Other partnerships are more informal but equally vital. For example, having Tech Transfer route all international MTAs through your office, even before a formal review is requested, may be a helpful partnership. Reviewing COI disclosure reports can allow for proactive identification of foreign relationships or funding. Additionally, within Admissions, we’ve worked with both undergraduate and graduate teams to integrate export compliance screenings into the admissions/visa process.
These efforts have distributed responsibility and awareness throughout the institution. By embedding compliance checkpoints into the processes of multiple departments, we reduce the risk of noncompliance and alleviate the pressure on a single office. Most importantly, we've rebranded our role, shifting from gatekeepers to collaborators, and created a culture where export control is viewed as a shared responsibility and an integral part of university operations (JoSTC, 2022).
Tools that aid partnerships and process improvements include internal guidance documents, checklists, system flags, and training. The most essential tool for an institution is the Export Control Plan (ECP). This needs to be a plan that is applicable to the current processes at your institution. The ECP cannot be relied upon if it is not up to date (UM System ECMP, 2023).
It is crucial to involve stakeholders, specifically in determining the logistics of the workflow for form completion within an organization. If an institution already has a policy on international travel for employees, it can be adapted into a screening form. The goal is to make the policy understandable for those who must adhere to it. For international hires, it may be helpful to develop a checklist of yes-or-no questions that does not delve too deeply into the details but will at least determine whether an export control review is required. Department heads completing the hiring process could answer these questions and take ownership of fact-finding about the employee's planned activities, so that the answers to the form determine whether screenings or license requirements are triggered, depending on the end user or end use. A recent preprint also suggests there may already be electronic systems in use for operations that touch on export control requirements – utilize them (Liang, 2025)! An illustration is shown in Figure 1.2.
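As a minimal sketch of how such a yes-or-no checklist could drive the routing decision, consider the fragment below. The questions and trigger logic are hypothetical illustrations, not an actual institutional form; a real ECP would define its own questions and criteria.

```python
# Hypothetical hiring-screen checklist: any "yes" answer routes the hire
# to the Export Control Office for a closer look.
SCREENING_QUESTIONS = {
    "works_with_controlled_technology": "Will the hire work with export-controlled technology or data?",
    "requires_visa": "Does the hire require a visa to work in the U.S.?",
    "international_travel_planned": "Is international travel planned as part of the role?",
}

def needs_export_review(answers: dict) -> bool:
    """True if any screening question was answered 'yes'."""
    return any(answers.get(q, False) for q in SCREENING_QUESTIONS)

# A department head fills out the form for one hire:
hire = {
    "works_with_controlled_technology": False,
    "requires_visa": True,
    "international_travel_planned": False,
}
```

The point of keeping the logic this simple is exactly what the article describes: the department head answers plain questions and owns the fact-finding, while the detailed license and end-use analysis stays with the Export Control Office.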
One example of a successful integration of export control operations is to allow the Export Control Officer (often wearing multiple compliance “hats”) to be involved in the Pre-Award process. Working with Pre-Award to be the eyes and ears for standard export control requirements helps identify any potential international collaborations or shipments that could become a barrier while awaiting the identification of restrictive clauses or license determinations, had they not been identified early in the Sponsored Programs workflow. The goal of an Export Control Officer is not to be a hindrance, but to anticipate potential additional requirements in a way that does not hinder the proposal from successful execution (Center for New American Security, 2021).
Lessons learned when working to improve processes include the importance of relationship building between Export Control operations and all stakeholders. Travel, Admissions, Sponsored Programs, Legal Counsel, and Tech Transfer, among others, are too many areas for one Export Control Office, no matter how centralized, to manage without a little help. Building relationships with stakeholders allows both sides of the operation to see the importance of their respective processes and gives shared ownership over compliance gaps (GAO, 2020). Through this benefit, Export Control Officers can work with stakeholders to provide a clear and consistent message on requirements and their application. This enhances the compliance culture on campus and increases visibility of Export Control policy and procedures.
In our current ever-evolving regulatory landscape, export compliance can become more challenging by the day while being more critical than ever. Building partnerships and developing tools with inputs from across campus allows export control offices to strengthen their ability to ensure compliance. We encourage readers to take what they’ve learned and reach out to a potential new partner on their campus and collaborate to identify how you could work together to implement export controls into a new or existing process. Export compliance is a campus-wide responsibility, and building a culture of partnership makes ensuring compliance a much more attainable goal for everyone – because we all need a little help! N

References
Center for a New American Security. (2021). Rethinking export controls: Unintended consequences and the new technological landscape. CNAS. www.cnas.org/publications/reports/rethinking-export-controls-unintended-consequences-and-the-new-technological-landscape
Journal of Strategic Trade Control (JoSTC). (2022). Volume 1, Issue 1: Strategic Trade Control in Higher Education. https://strategictradecontrol.org/journal
Liang, W. (2025). Strengthening compliance and regulatory strategies to navigate trade restrictions and sanctions [Preprint]. ResearchGate. www.researchgate.net/publication/390194384
University of Missouri System. (2023). Export Compliance Management Program. www.umsystem.edu
U.S. Government Accountability Office (GAO). (2020). Export Controls: State and Commerce Should Improve Guidance and Outreach to Address University-Specific Compliance Issues (GAO-20-394). www.gao.gov/products/gao-20-394

Dr. Mallory Ball is the Director for Research Compliance and Integrity at Western Carolina University, where she oversees institutional compliance in areas including IRB, IACUC, IBC, Export Control, and Conflict of Interest.
A Certified IRB Professional with expertise in social and behavioral research, she previously served as an IRB Administrator at East Carolina University and now leads initiatives to streamline post-approval monitoring, improve research integrity training, and strengthen cross-office partnerships in export control compliance. She can be reached at mball@email.wcu.edu.

Daniel Gregory is an Export Compliance Specialist at Penn State University, where he helps faculty and staff navigate complex U.S. export control regulations and laws. He is skilled in leveraging partnerships and technology to streamline workflows and increase efficiency to minimize administrative burden for researchers and staff. Prior to joining Penn State, Daniel held export compliance positions in academia at Vanderbilt University and East Carolina University and in industry at Siemens Corporation. He has over 10 years’ experience with export licensing, government regulations, developing and delivering training, and improving regulatory programs. He can be reached at dvg5728@psu.edu.

Facing Uncertainty Head-On: Legal Action, Policy Shifts, and the Future of Research Funding
Based on the June 26th NCURA Navigating Change Conversation Series with Roseann Luongo, Education + Research Senior Director, Huron; Tanta Myles, AVP-RIA, ORIA, Georgia Institute of Technology; Christine G. Savage, Choate, Hall & Stewart LLP; Martin Smith, Director, Education & Research, Huron
Disclaimer
This article is for informational purposes only and does not constitute legal advice. It reflects insights of attorney Christine Savage on current issues affecting research institutions. If you are facing a specific legal issue, you are strongly encouraged to seek individual counsel. The content is attorney work product and intended for internal use only. It should not be reposted externally or used in a manner suggesting authorship by others.
June 2025 marks a period of unprecedented flux in the world of research administration. In NCURA’s recent Navigating the Changing Federal Landscape conversation, a panel of seasoned experts—Roseann Luongo (Huron), Tanta Myles (Georgia Tech), Christine Savage (Choate Hall & Stewart), and Martin Smith (Huron)—joined Executive Director Kathleen Larmett to reflect on where we are, what’s unfolding, and how institutions can prepare for what's next.
A live poll of attendees revealed what many are feeling: it’s not just one issue—it’s everything, everywhere, all at once. From budget uncertainties to terminated awards, indirect cost rate challenges, and heightened regulatory scrutiny, institutions are under pressure from all angles.
Tanta Myles emphasized the ripple effect of award terminations, noting that institutions are grappling with long-standing programs—some decades old—suddenly being called into question. Roseann Luongo added that even routine processes like no-cost extensions and federal draws now face new scrutiny. “This is not just business as usual with new paperwork—it’s a fundamental shift in how research is managed.”
Christine Savage brought encouraging news from the legal front: “Courts are acting swiftly and decisively,” she said. Recent decisions in Massachusetts and California have invalidated blanket terminations, reaffirming that district courts can and will hear these cases. In one high-profile decision, the judge not only ruled in favor of reinstating NIH funding but refused to grant the government’s request for a stay—demanding that funds be restored immediately.
Still, Savage cautioned that litigation is ongoing. Cases tied to the ambiguous “no longer effectuates agency priorities” language remain active. Meanwhile, lawsuits by institutions like Harvard are testing the limits of government authority over grant terminations and the processing of student and research visas. The outcomes of these cases will shape how institutions navigate funding, compliance, and risk for years to come.
Martin Smith provided a deep dive into the FAIR model discussions led by the Joint Associations Group (JAG). Institutions are currently modeling the impact of two main options: a simplified flat-rate model and a more granular, cost-specific approach. While model one may yield less recovery, institutions appear to favor its transparency and administrative ease. “At a time when resources are shrinking, simplicity matters,” said Smith. But any change will require institutions to rethink infrastructure. “Whether it’s budgeting tools, ERP systems, or training, you’ll need to be ready to pivot quickly,” he added, especially if changes are written into the next federal budget.
The panel stressed that these research-specific issues aren’t occurring in isolation. Declining enrollment, stagnant endowments, inflation, and workforce challenges all compound the strain. “This isn’t just about research,” Myles noted. “It’s a test of how our institutions can adapt across the board.”
Savage also raised growing concerns around cybersecurity, whistleblower complaints, and foreign collaborations. “Even as funding tightens, the compliance expectations aren’t shrinking,” she warned. New legislative proposals like the CARGO Act and increased scrutiny over foreign support signal more to come.
Luongo concluded with a call to action: “Now is the time to lean into innovation. Let’s not just react—let’s lead.” She encouraged attendees to revisit risk assessments, improve internal controls, and consider new models for collaboration and cost recovery.
The path forward is anything but clear, but one thing is certain: research administrators are navigating this uncertainty together. As Savage put it, “The good news is, we’re still here—and we’re in this together.” N
The slides and webinar recording are available for all on NCURA’s Changing Federal Landscape website.
Navigating the Changing Federal Landscape: Legal Shifts, Compliance Pressures, and the Road Ahead
Based on the July 23rd NCURA Navigating Change Conversation Series with Caroline Beeman, Director at Maximus, Mak Karim, Maximus Senior Consultant (Former National Director of HHS-CAS), Jeff Seo, Chief Research Compliance Officer at Northeastern University, and Scott Sheffler, Partner Venable LLP
Disclaimer
This article is for informational purposes only and does not constitute legal advice. It reflects insights of attorney Scott Sheffler on current issues affecting research institutions. If you are facing a specific legal issue, you are strongly encouraged to seek individual counsel. The content is attorney work product and intended for internal use only. It should not be reposted externally or used in a manner suggesting authorship by others.
The federal research environment continues to evolve rapidly, challenging institutions to adapt to emerging guidance, regulatory constraints, and litigation trends. NCURA’s recent webinar in the Navigating the Changing Federal Landscape series brought together key voices to decode what these shifts mean for research administrators on the ground.
Curtailing Foreign Subawards: New NIH and NSF Guidance
Scott Sheffler (Venable LLP) opened the session by addressing new NIH and NSF policies related to international collaborations. NIH’s May guidance proposed ending most foreign subawards, citing transparency concerns and limited oversight under FFATA reporting. Although initially presented as “prospective only,” subsequent clarification in July revealed that even renewals and non-competing continuations of existing awards would be affected.
To mitigate research disruption—particularly in human subjects studies—NIH introduced a “parallel award” model. Under this structure, the U.S. institution and the foreign collaborator would each receive a separate prime award, with strict limitations on automatic carryover and other flexibilities. Institutions are now preparing for increased administrative burden and greater federal oversight.
NSF followed suit with new research security requirements, effective October 2024. Key provisions include mandatory training for senior/key personnel on foreign influence, export control, and ethical research practices. Annual disclosures will now require certification that no individuals are participating in “malign foreign talent recruitment programs.” These policies apply to all recipients, regardless of annual NSF funding thresholds, further broadening institutional compliance obligations.
Sheffler also walked attendees through recent legal developments, particularly the CASA Supreme Court decision, which significantly restricts the use of nationwide injunctions. This means legal challenges to federal grant policy—such as indirect cost caps—will only apply to plaintiffs named in the suit or their member institutions. As a result, university associations will play an even more critical role in advocacy and legal protection.
In the case of Association of American Universities v. Department of Defense, plaintiffs secured a preliminary injunction halting DOD’s 15% indirect cost cap, but only for their members. Litigation will continue, with a final ruling expected later this year.
Grant terminations were another pressing topic. Panelists acknowledged the growing trend of abrupt grant cancellations and shared practical advice for institutions pursuing termination cost reimbursement under 2 CFR §200.472. While some institutions have successfully appealed terminations—particularly with NIH—success often depends on quick action and familiarity with the agency’s unique appeal process.
Caroline Beeman (Maximus) introduced the Joint Associations Group’s FAIR (Financial Accountability in Research) model—a proposed alternative to the current F&A rate structure. In response to mounting political pressure for simplified and transparent indirect cost recovery, the FAIR model offers a tiered structure:
• Base Option - a fixed 15% (of total budget) for General Research Operations (GRO), with an additional fixed 10% for research IT and facilities.
• Expanded Option - a fixed 15% (of total budget) General Research Operations (GRO), with four additional categories of formerly indirect costs to be charged directly to grants going forward – “Regulatory Compliance”, “Award Monitoring, Oversight, and Reporting”, “Essential Research Performance Facilities” (facilities costs), and “Research Information Services”.
While the model has faced mixed feedback, it represents a proactive step in shaping the conversation rather than reacting to imposed caps. Beeman emphasized the importance of transparency, clarity, and adaptability in any future cost recovery framework.
Mak Karim (Maximus) provided an update on the HHS Cost Allocation Services (CAS) structure. Following consolidation to two primary offices (Rockville and Dallas), staffing shortages have slowed rate agreement processing. Institutions are encouraged to submit proposals through the new ICAS portal and be prepared for delays.
Both Karim and Sheffler highlighted expected revisions to 2 CFR Part 200, likely to include broader agency termination rights and potential redefinitions of indirect cost policies—changes that could reshape research finance for years to come.
In today’s shifting regulatory climate, staying informed is no longer optional—it’s essential. This session underscored the importance of proactive compliance planning, legal awareness, and collective advocacy. As NCURA members prepare for what’s next, one message rings clear: vigilance, adaptability, and collaboration are our strongest tools. N
The slides and webinar recording are available to all on NCURA’s Changing Federal Landscape website.

www.facebook.com/ncuraregioni
Happy Summer! Region I is in full swing preparing for the Annual Meeting in Washington, D.C. We’re excited to attend the educational sessions, social gatherings, and volunteer opportunities. This event always brings renewed energy—a chance to learn, reconnect with long-time colleagues and friends, and welcome new faces to our NCURA community. A sincere thank you to Christopher Medalis and the Volunteer Committee for their dedicated efforts in recruiting and organizing volunteers for this meeting. Your hard work and commitment are truly appreciated!
Congratulations to our two Region I travel award recipients: Jeffrey Cosier, University of New Hampshire and Jeffrey Moyer, University of Massachusetts Boston. Well done!
Looking ahead, Region I is busy organizing events for the second half of the year. Keep an eye out for upcoming Wicked Good Chats, Regional Workshops (some new!), and other educational opportunities from our Curriculum, Professional Development, and DEI Committees.
In light of these challenging times, we are also planning a few fun and informal events designed to help us stay socially connected. These gatherings might be the perfect way to get out, unwind, and connect with fellow NCURA Region I members. Invites will be coming soon!
At the time of writing, our Governance Committee is preparing to submit the slate of candidates for the upcoming Region I election. By the time you read this, voting should be complete—and we’ll be welcoming a new group of leaders ready to bring fresh ideas and perspectives to the region. We're excited to see how their leadership shapes our future. Thank you, Jill Mortali and the Governance Committee, for gathering a great slate of candidates!
A heartfelt thank-you goes to Robert Prentiss, Chair of the Communication and Membership Committee, and his incredible team. Their efforts to share timely updates on our social media platforms, promote regional events, and support our Spring Meeting and New Member Virtual Coffee Hours have been invaluable. We especially appreciate the warm welcome they extend to new members, helping them feel connected and informed from day one. Another special shout-out to Karen Markin, our dedicated Secretary, for her tireless work each week in preparing the regional email updates. It's no small task, and your consistency and attention to detail do not go unnoticed—thank you!
Wishing everyone a safe, joyful, and refreshing summer!
Stacy Riseman, MBA is the 2025 Chair of Region I and serves as Director, Office of Sponsored Research at the College of the Holy Cross. She can be reached at sriseman@holycross.edu.
www.ncuraregion2.org

www.facebook.com/groups/ncuraregionii
It’s officially the Dog Days of Summer and that means I am looking forward to seeing and meeting you all at the Annual Meeting!
Following AM67, NCURA Region II is pleased to announce that registration is now open for our 2025 Virtual Fall Meeting! This year’s diverse program will have something for everyone at the low cost of $115 if you register by the end of August! Please visit the 2025 Region II Fall Meeting website, www.ncuraregion2.org/regional-meeting, for more information. We look forward to connecting online in October.

Even if you’re not a part of Region II, we welcome you to attend or present at the virtual meeting. You don’t have to register to present, so it’s a great way to build your experience all from the comfort of your desk. We’re also open to suggestions for good topics, as always! Please email me with your thoughts.
Additionally, the Region II Leadership Development and Nominating Committee is accepting nominations for the Region II Distinguished Service Award. The Distinguished Service Award is an annual award to recognize members of NCURA Region II who have made sustained and significant contributions to the Region. The Awardee(s) will be recognized on a specific day and time during the 2025 Virtual Regional Meeting, October 28-29, 2025. Complete nomination packets must be received no later than September 8, 2025. For information on the nomination process and requirements, please visit www.ncuraregion2.org/awards.
Watch for announcements on the Region II website, Facebook, Collaborate and in e-blasts and the quarterly newsletter. Please reach out to me if you have any suggestions for programming, networking opportunities or events/activities.
Cassie Moore is the 2025 Region II Chair and serves as Assistant Director at the Office of Research Administration at the University of Maryland, College Park. She can be reached at cmoore17@umd.edu.

www.ncuraregioniii.com

www.facebook.com/ncuraregioniii
Happy Summer from Region III! We had a flamingo-fabulous meeting in Louisville, Kentucky in April and members were thrilled to see colleagues and network. At the lunchtime business meeting on April 29th, we recognized members of the 2025 Program and Planning Committees for their efforts. We were also excited to announce our regional award winners. Emily Devereux (University of South Carolina) received the Region III Distinguished Service Award. Kristen Hartung (University of Arkansas) won the Pam Whitlock Rising Star Award. Erika Cottingham (Auburn University) was awarded the Sandy Barber Volunteer Award. Additionally, Shelby Mills (University of South Carolina) received the Region III travel award. Congratulations to our amazing awardees! We appreciate all you do for Region III.
The business meeting ended with our traditional Region III Executive Committee leadership transition: Carpantato “Tanta” Myles (Georgia Institute of Technology) completed her term as Immediate Past Chair, Jeanne Viviani-Ayers (University of Central Florida) moved into the role of Immediate Past Chair, Celeste Rivera-Nunez (University of Central Florida) became Chair, and Ford Simmons (Medical University of South Carolina) is now Chair-Elect. We are also excited to welcome a number of new committee coordinators to our flock. We are grateful for their commitment and look forward to seeing their accomplishments over the next year.
Finally, we are excited to announce that our May 2026 Region III’s Spring Meeting will take place in Myrtle Beach, South Carolina. Planning is underway led by our Chair, Celeste Rivera-Nunez. If you are interested in volunteering, please let us know!
We hope to see lots of Region III members at this year’s Annual Meeting, August 10-13, 2025! Until then, stay up to date on the latest regional news by connecting on Facebook, Twitter, Instagram, the RIII Members Collaborate Community, and at NCURARegionIII.com.
Rebecca Wessinger is Region III Secretary and serves as the Director of Research Operations, for the University of South Carolina, Molinaroli College of Engineering and Computing. She can be reached at RRWessinger@sc.edu.
www.ncuraregioniv.com

www.facebook.com/pages/Ncura-Region-IV/134667746605561
It sure has been the summer of change, challenge, and pivoting! Many members are implementing new grant and financial systems, restructuring, facing budget challenges, and managing the ever-evolving federal landscape. Thankfully, NCURA connects us with resources and more importantly with each other.
My goal as Chair is to increase membership involvement. Region IV has 1632 members, and not all members participate in activities. I want to hear what type of activities members want to be a part of. What can NCURA do for members that we are not doing regionally or nationally? How can NCURA make connections for you?
In the last issue, because of timing, I was unable to note three other wonderful Chicago Spring Meeting activities. Awards were given, a tool was launched, and we started selling regional swag.
Distinguished Service Awards were presented to Kathy Durben (Marquette University) and Diane Hillebrand (University of North Dakota). Tricia Sutton (Washington University in St. Louis), Heather Taylor (Medical College of Wisconsin), Shaunte Baboumian (Washington University in St. Louis), and Sarah Smith (Case Western Reserve University) received travel awards to attend the regional meeting. National travel awards were given to Brianna Galli (University of Michigan) and Lupe Cotto (Northwestern University).
A Volunteer Matching Tool was launched. The tool maps a person’s interest, skills, availability, and personality to a committee or role to consider. Check it out at www.ncuraregioniv.com/volunteer-matchmaking. Region IV has swag! We have It Depends T-shirts and Region IV mugs, caps, and tote bags. I invite all NCURA members to celebrate their dedication to the profession by visiting the region’s website and making a purchase.
Thank you to the Region IV members who served as track leads for AM67: Michelle Schoenecker (University of Wisconsin-Milwaukee), Laina Stuebner (University of Wisconsin-Green Bay), and Keith Page (Washington University in St. Louis).
Thanks to Immediate Past Chair Roger Wareham (University of Wisconsin-Green Bay), the region board successfully updated the region’s administrative procedures. One of the next tasks at hand is to create a regional timeline (of activities and deadlines) to help future volunteers and board members plan and prepare. The board will also review board position descriptions to ensure they align with the bylaws and updated administrative procedures.

Have a great end of summer, and mark your calendars for the 2026 Spring Meeting in St. Louis, MO at the Hilton St. Louis Ball Park, April 19-22.
Sandy Fowler is the 2025 Chair of Region IV and serves as the Assistant Dean for Research for the College of Agricultural and Life Sciences at the University of Wisconsin – Madison. She can be reached at sandy.fowler@wisc.edu.

www.facebook.com/ncuraregionv
It will be such a joy to see so many of your bright, smiling faces at Annual Meeting 67! Let’s keep the momentum going—be sure to stay connected with our Region V community through the NCURA Collaborate platform and on LinkedIn. These are spaces where every voice is valued, supported, and uplifted.
We’ve made exciting progress in planning the Region V Fall 2025 Meeting, and I can’t wait to welcome y’all to San Marcos this October! I want to extend my heartfelt thanks to our incredible Program Committee Chairs:
• Ketti Eipers-Smith, Education Committee Chair
• Lizette Gonzales, Entertainment Committee Chair
• Sylvia Moore, Sponsorship Committee Chair
Your leadership and dedication are shining examples of the impact our volunteers make. Thanks to you—and the amazing volunteers on your committees—this year’s Fall Meeting will offer meaningful professional development, engaging local entertainment, and a strong sense of appreciation from our generous Region donors. Your tireless efforts ensure that attendees will leave feeling energized, informed, and inspired to lead.
I’m especially excited to announce our keynote speakers:
• Dr. Art Markman will kick things off with our opening keynote
• Dr. Susan Wyatt Sedwick will close the meeting with her insights and inspiration
You can learn more about both speakers and their contributions to our field on the NCURA Region V Website.
Registration is now OPEN! Visit the NCURA Region V Website for hotel booking details, the schedule-at-a-glance, registration info, and updates throughout the year. We’re planning 8+ workshops, 40+ sessions, and plenty of wellness and networking activities to keep you connected and refreshed.
And don’t forget—our Region V programming continues year-round! Join us for our Virtual Lunch & Learns every second Wednesday at noon CT.
Let’s move through the rest of the year with solidarity, community, and vision—and continue to celebrate The Power of Y’all!
Liz Kogan, CRA, is the 2025 Region V Chair and serves as the Director of Research Administration, College of Education, at the University of Texas at Austin. Liz can be reached at liz.kogan@austin.utexas.edu.
www.ncuraregionvi.org

www.facebook.com/groups/729496637179768
Summer is here! The first half of 2025 has been a whirlwind, and now it’s time to set our sights on what’s ahead.
The Education and Professional Development Committee continues to deliver valuable programming through the popular lunchtime series. Stay tuned for upcoming seminar announcements—keep an eye on your inbox! This fall, we’ll also host a Meet Your Officers quarterly session and an information session for the 2026 LeadMe program. Be sure to check our website regularly for updates.
A big thank you to everyone who attended the April seminar, Subaward Series: Behind the Single Audit, presented by Jennifer Cory and David Scarbeary-Simmons. And exciting news—our regional membership is now approaching 1,400 members!
This year at AM67, we’re embracing the theme: “Forward Focused: Priming for Change.”
We’re also eagerly anticipating the big reveals about AM69 and RM2026. Get ready for Tuesday night—we’ve got a fun and interactive game planned that you won’t want to miss! We’re still looking for volunteers, so if you’re able to help, please sign up via the links in the volunteer opportunities email. And don’t forget to stop by our connection table to pick up your regional lanyard and swag—and to say hello. We’d love to see you!
Looking ahead, our Fall Regional Meeting in Costa Mesa, CA is just around the corner! Registration is now open at ncuraregionvi.org/rm2025. Join us as we explore the theme: “Building the Future of Research Administration.”
Our Planning and Program Committees have been hard at work to bring you an outstanding experience. This year’s program features seven tracks, four workshops, and some of our most comprehensive sessions yet. Evening events and our keynote speaker will be announced soon.
Interested in volunteering? Please contact our Membership and Volunteer–Meetings Chairs:
Mich Pane – michiko@stanford.edu
Kari Vandergust – karivan@stanford.edu
Matt Michener is the Region VI Chair and serves as Associate Director of the Office of Research Support and Operations at Washington State University. He can be reached at matthew.michener@wsu.edu.

Greetings, Jackalopes!

www.facebook.com/groups/NCURARegionVII
We are looking forward to a fantastic experience at the Annual Meeting (AM67)! The Region VII leadership is working hard on activities and networking opportunities for our members who attend.
We’d like to congratulate our travel award winners for this year:
Annual Meeting Travel Award:
Betty Rasmussen, University of Colorado, Boulder
Regional Meeting Travel Awards:
Savannah Allshouse, Colorado School of Mines
Emma Ryan, Colorado State University
Trisha Southergill, Colorado State University
Taylor Wayment Memorial Travel Award:
Lori Shuler, University of Wyoming
In other news, our PUI committee, led by Sylvia Bradshaw, has cultivated a vibrant space for collaboration and connection among research administrators from predominantly undergraduate institutions. Through interactive quarterly meetings focused on current hot topics, the committee has fostered meaningful peer exchange, highlighted shared challenges, and celebrated innovative solutions emerging from PUIs across the region.
Now we are completing final preparations for the 2025 Region VI and VII meeting in Costa Mesa, CA. We have an exciting program planned and can’t wait to “Build the Future of Research Administration” with you! This year’s conference events will include a fun and fascinating keynote speaker, a beach-themed dance party on Tuesday evening, and lots of opportunities for networking with our colleagues. We can’t wait to see you there!
Stay connected to the Region: Join our Facebook page and the Region VII community on NCURA Collaborate or visit our regional website: www.ncuraregionvii.org
Want to volunteer? Contact Volunteer Coordinator Betty Rasmussen (betty.rasmussen@colorado.edu). Betty will be able to advise on the various opportunities available.
Brigette Pfister is the 2025 Region VII Chair and currently serves as the Financial Compliance Manager in the Office of Sponsored Programs at Colorado State University. She can be reached at Brigette.Pfister@Colostate.edu.
Dear Region VIII Members,
I hope many of you are planning to attend the Annual Meeting this August. It promises to be an exciting and enriching event! The program features a strong global track and numerous opportunities to connect, including our Monday Region VIII dinner, a NetZone together with our friends from Region V on Monday evening, and a fun conference event with our very own Region VIII limbo game on Tuesday night.
We will also hold our annual Regional Business Meeting during the conference, where we will share updates on membership, leadership, budget, committees, events, and future opportunities. As always, this is also a time to reflect on the vital role of volunteering, the cornerstone of our organization.
We are currently seeking enthusiastic new officers to help lead Region VIII into the future. Open positions include:
• Chair-Elect
• Secretary
• Treasurer-Elect
• National Board Member
You can find detailed role descriptions on the Region VIII website. We encourage applications from across all continents and hope to receive plenty by September 1st. If you have any questions about the roles or the nomination process, please don’t hesitate to reach out. We would be happy to discuss the opportunities with you.
I also warmly invite you to submit an abstract for the FRA or PRA Conference in Puerto Rico, March 2026. The deadline is August 20th. The Global Track is ideal for sessions on international collaboration and compliance, cross-border financial management, pre-award strategies, capacity building, and more. Your insights and experiences are incredibly valuable to the global research community and the setting will be fabulous.
Lastly, I am pleased to share that all Region VIII members are warmly invited to attend the Regional Meeting of Regions VI and VII, taking place in Southern California in November 2025. Registration is now open, and we encourage you to take advantage of this wonderful opportunity to connect with colleagues from across regions. A heartfelt thank you to Regions VI and VII for their generous invitation. It is truly appreciated!
Looking forward to seeing many of you soon!
Tine Heylen is the 2025 Region VIII Chair and serves as Senior Advisor, European and International Projects at KU Leuven. She can be reached at tine.heylen@kuleuven.be.

On Tuesday, April 15, researchers from across academia and industry, including Microsoft and Battelle, gathered at the Artificial Intelligence and Machine Learning Symposium to share their latest research and to network. Organized by Office of Research Senior Director of Research Development Pamela Clarke and College of Engineering and Architecture Assistant Professor Anietie Andy, Ph.D., the event showcased the diverse and increasingly essential applications of AI and machine learning. From linguistics to medical imaging, the researchers showcased how revolutionary this technology can be, as well as its limitations.
Many of the presentations focused on the power of AI to improve image quality and analyze massive amounts of data. This could lead to everything from a deeper understanding of how the brain functions to faster, more accurate, and cheaper imaging of signs of disease at the cellular level. Along with potential research applications, several of the talks addressed ways to make the AI technology we use in our daily lives—from chatbots to transcription services—better. For example, Denae Ford Robinson, Ph.D., a principal researcher for Microsoft Research, helped build a framework for identifying the potential psychological impacts of services like ChatGPT. Human-Centered AI Center Senior Research Scientist Lucretia Williams, Ph.D., worked with Google to record over 600 hours of African American voices with the goal of reducing the higher error rates Black users face with speech-to-text devices.

The symposium ended with a roundtable on the ethics of AI, focusing especially on its growing use in education. The discussion gave faculty and students the opportunity to acknowledge both the technology’s usefulness in things such as coding and research and the potential issues of plagiarism and inaccurate information.
These projects and discussions only scratch the surface of the breadth of AI work being done at Howard. Andy and Clarke intend for this to be the start of a broad expansion of the university’s AI projects, and plan to make the symposium a recurring meeting where researchers can connect and discuss opportunities in the field.
Welcoming the attendees, Clarke emphasized the need to work together to position Howard as a leader in AI research.
“Our overall vision is to build out a national AI research center led by Howard University,” she said. “It’s an area of research where we have strength, but we have been working in silos. So this is one of the first opportunities from an institution standpoint to really get a sense of what our faculty are doing and to hear from our collaborators.” N
For the complete article visit https://thedig.howard.edu/all-stories/research-month-symposiumhighlights-ais-growing-benefits-and-potential-biases
TRAVELING WORKSHOPS
• Contract Negotiation and Administration Workshop September 15-17, 2025 Indianapolis, IN
• Financial Research Administration Workshop September 15-17, 2025 Indianapolis, IN
• Level I: Fundamentals of Sponsored Projects Administration Workshop September 15-17, 2025 Indianapolis, IN
• Level II: Sponsored Projects Administration-Critical Issues in Research Administration November 17-19, 2025
New Orleans, LA
• Level I: Fundamentals of Sponsored Project Administration Part II: Post-Award August 26-27, 2025
• Level I: Fundamentals of Sponsored Project Administration Part I: Pre-Award September 9-10, 2025
• Level I: Fundamentals of Sponsored Project Administration Part II: Post-Award September 30 - October 1, 2025
• Level II: Sponsored Projects Administration-Critical Issues in Research Administration October 6-9, 2025
REGIONAL MEETINGS
• Region II (Mid-Atlantic) October 28-29, 2025 Virtual Meeting
• Region V (Southwestern) October 19-22, 2025 San Marcos, TX
• Region VI (Western)/Region VII (Rocky Mountain) November 2-5, 2025 Costa Mesa, CA
ONLINE TUTORIALS—10 week programs
• A Primer on Clinical Trials
• A Primer on Federal Contracting
• A Primer on Intellectual Property in Research Agreements
• A Primer on Subawards
NATIONAL CONFERENCES
• Financial Research Administration Conference March 19-20, 2026 San Juan, Puerto Rico
• Pre-Award Research Administration Conference March 22-23, 2026 San Juan, Puerto Rico
• Annual Meeting August 1-4, 2026 New York City, NY
For further details and updates visit our events calendar at www.ncura.edu.