

THOUGHT LEADERSHIP COMPENDIUM
Released on April 27, 2025
FEATURED ARTICLES BY





























Does AI Hallucinate of Electric Sheep?
Avoiding AI Nightmares: How To Ground LLMs In Reality

Authored by: Brian Neely, CIO/CISO

Nonsensical Gibberish...or Worse?
In his influential 1968 sci-fi novel Do Androids Dream of Electric Sheep?, the basis for the movie Blade Runner, Philip K. Dick explored the philosophical and ethical questions that arise when artificial beings approach humanlike consciousness. In this visceral and thought-provoking story, Dick questions whether a non-biological entity is capable of taking on sentient characteristics, such as the ability to perceive, feel, and be self-aware, blurring the line between human and artificial intelligence. While we hope that AI is still a long way from turning on its creators like the "replicants" in Dick's story, concerns over what can happen at the intersection of technology and humanity are more relevant today than ever.
One of the most well-known issues with widely used large language models (LLMs) like OpenAI's ChatGPT, Meta's Llama, or Google's Gemini is their tendency to "hallucinate," or present false, misleading, or illogical information as if it were factual. Whether it's citing non-existent legal cases or claiming Acapulco is the capital of Antarctica, hallucinations typically happen when an LLM isn't given proper context or enough high-quality data.
Because LLMs use linguistic statistics to generate responses about an underlying reality they don’t actually understand, their answers may be grammatically and semantically correct but still make no sense at all. A poorly trained AI model with inherent biases or blind spots will try to “fill in the blanks,” but end up producing nonsensical gibberish, or worse.
Some have drawn parallels between AI hallucinations and human dreaming: both creatively connect seemingly unrelated data without logical grounding. Irrational, unexpected AI responses can be used to spark new ideas in creative writing, art, or music, perhaps during an "outside-the-box" brainstorming session, but they can also be dangerous. They can erode trust in AI, lead to poor decision-making, and even result in harmful outcomes; you certainly don't want hallucinations in medical diagnoses or self-driving car decisions.
That's why leading AI scientists continue advancing the battle against hallucinations, and why three popular approaches are used to reduce hallucinations, or even eliminate them entirely.


APPROACH 1:
Prompt Engineering
The simplest approach to reducing the odds of AI hallucinations is to put careful thought into how we interact with AI in the first place. This means shifting our thinking from "can AI properly answer my question?" to "can I ask AI the right question?" Engineering a better prompt doesn't require any specialized skills; it's something anyone can do, and it works with virtually all GenAI models.
A prompt is simply the set of instructions that you give a GenAI model to get an intended response. Good prompt engineering involves providing proper context and perspective on what you’re asking, assigning the AI model a role and even telling it how you want the response to be structured.
Don't just write a few words or a single sentence like a traditional Google search. Instead, be as detailed and comprehensive as possible, even using multiple prompts if necessary. Essentially, you are shaping the input to influence the output, guiding the LLM's behavior without altering the underlying model itself. The better the prompt, the better the response.
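To make this concrete, here is a minimal sketch of a well-engineered prompt expressed in code (assuming OpenAI's Python SDK; the model name, role, and wording are illustrative, not prescriptive). It assigns the model a role, supplies context, and specifies the output format:

    # Illustrative only: the model name, role, and prompt text are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            # Assign the model a role and constrain its behavior.
            {"role": "system", "content": (
                "You are a federal contracts analyst. Answer only from the "
                "context provided. If the context is insufficient, say so "
                "instead of guessing."
            )},
            # Provide context and specify how the response should be structured.
            {"role": "user", "content": (
                "Context: <paste the relevant contract excerpt here>\n\n"
                "Question: What are the key compliance deadlines?\n"
                "Format: a bulleted list, one deadline per line, with dates."
            )},
        ],
        temperature=0,  # lower temperature curbs creative (and hallucinated) output
    )
    print(response.choices[0].message.content)

Notice that nothing about the model changed; only the input did.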
APPROACH 2:
Retrieval-Augmented Generation (RAG)
This approach involves constraining an LLM to specific data sources or proprietary content so it provides more relevant answers. Instead of looking at its entire data set for context, the LLM will only use a predetermined subset of data to formulate its responses. RAG works well for organizations with highly dynamic data and well-defined content sources. In specialized areas, such as the medical field, a doctor could use an off-the-shelf LLM like ChatGPT that is configured to use their hospital's medical library as a source, so all responses come from a medical perspective.
From a user’s perspective, RAG implementation is seamless and transparent, generally only requiring lightweight setup by admins to connect relevant data sources. The result is better, more accurate and, most importantly, context-aware responses.
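Under the hood, the retrieval step is conceptually simple. The following toy sketch (assuming the open source sentence-transformers library; the documents, model name, and query are invented for illustration) embeds a small document set, retrieves the passages closest to a question, and builds a grounded prompt for the LLM:

    # Toy RAG retriever: embed documents, find the closest ones to a query,
    # and build a grounded prompt. Library choice and data are assumptions.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    documents = [
        "Hospital policy: sepsis screening is required within 1 hour of triage.",
        "Formulary note: drug X is contraindicated with anticoagulants.",
        "Facilities memo: parking garage B closes at midnight.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(documents, normalize_embeddings=True)

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the query."""
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # normalized vectors: dot product == cosine similarity
        return [documents[i] for i in np.argsort(-scores)[:k]]

    query = "When must sepsis screening happen?"
    context = "\n".join(retrieve(query))
    prompt = (f"Answer using ONLY the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {query}")
    print(prompt)  # this grounded prompt is what actually gets sent to the LLM

A production system would swap the in-memory list for a vector database and the hospital's real content, but the grounding principle is the same.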

APPROACH 3:
Fine-Tuning
The most involved solution starts with a pre-trained LLM that has a general understanding of language. That model is then fine-tuned on smaller, task-specific datasets that contain high-quality, relevant examples on a certain subject—say, history. The goal is to fine-tune the model’s knowledge for the target task, in part by penalizing the model for generating irrelevant or implausible content.
This approach can be more expensive and time-consuming, and it requires specialized skills. You often have to use your own data (slow-changing data works best) and provide hands-on supervision to make sure you're getting the best results. But if done correctly, fine-tuning can teach LLMs to give more precise and appropriate responses on narrow, specialized topics.
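For a sense of the mechanics, here is a heavily simplified fine-tuning sketch (assuming the Hugging Face transformers and datasets libraries; the base model, data file, and hyperparameters are placeholder assumptions, and a real effort would add careful data curation, evaluation, and much more compute):

    # Minimal supervised fine-tuning sketch; model, file name, and
    # hyperparameters are illustrative assumptions, not recommendations.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    base = "distilgpt2"  # small stand-in for a real base LLM
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Hypothetical JSONL file of curated, high-quality history Q&A pairs.
    data = load_dataset("json", data_files="history_qa.jsonl")["train"]
    data = data.map(
        lambda ex: tok(ex["question"] + "\n" + ex["answer"], truncation=True),
        remove_columns=data.column_names,
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft-history", num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=data,
        # Causal-LM training: the loss penalizes the model whenever it assigns
        # probability to continuations that don't match the curated examples.
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()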

Better Together
These three approaches aren't mutually exclusive; in fact, they work best when used in unison. By using good prompt engineering, pointing an LLM at relevant data sources, and then fine-tuning it, you can eliminate almost all hallucinations and be left with results that are both context-aware and highly accurate.
In the end, we need to be mindful of hallucinations as AI-powered technologies are increasingly relied on for their quick results and powerful decision-making capabilities, a reliance that is only amplified in time-critical, high-pressure situations.
In the meantime, as we ask AIs for answers on a daily basis, it’s probably not a bad idea to be thorough, provide context and review the results with a critical eye. You don’t want to be like the lawyers in 2023 who relied on ChatGPT to help prepare a case—only to find the documents were filled with entirely made-up legal citations.
And just like in Dick’s book, where replicants aren’t inherently good or bad, but simply responding to how they’re being used, we need to make sure that we put AI in the best possible position to be a benefit, not a hazard.

About Brian Neely
As CIO and CISO for AMERICAN SYSTEMS, Mr. Neely leads the overall vision, planning, and management of technology, information, and cybersecurity-related resources throughout the enterprise. Mr. Neely began his career at AMERICAN SYSTEMS in 1996 and spent his first 10 years directly supporting the Defense and Intelligence communities.
Mr. Neely was selected as WashingtonExec's CISO of the Year, named an ExecutiveMosaic Top 10 Federal sector CIO and an InformationWeek 500 winner for Digital Business Leaders and the Innovative Use of Technology, selected to Forbes' Technology Council, and received a U.S. Presidential Commendation for 9/11 response. Mr. Neely is an alumnus of Virginia Tech (Engineering), Carnegie Mellon (Master's, Information Technology), and the University of Pennsylvania's Wharton School of Business (Business).

We know what’s at stake.®
Founded in 1975, AMERICAN SYSTEMS is a government IT and Engineering solutions provider and one of the top 100 employee-owned companies in the United States, with approximately 1,600 employees nationwide. Based in the Washington, D.C., suburb of Chantilly, VA, the company provides Information Technology, Program Mission Support, Engineering and Analysis, Test & Evaluation, and Training Solutions to DOD, Intel, and civilian government customers. For more information, visit: www.AmericanSystems.com.

Government Contractors: You Need a Financial Action Plan
AUTHORED BY: DONNA DOMINGUEZ | FEBRUARY 18, 2025
For federal contractors, a single contract termination or stop-work order is typically a challenging yet manageable administrative burden; however, multiple contracts impacted simultaneously can trigger a full-scale operational crisis. The collapse of multiple (or all) contracts for an organization has far-reaching consequences extending beyond the prime contractor to include subcontractors, employees, and the entire supply chain. The ability to reassign employees to other contracts will be diminished, layoffs and furloughs are likely, and lower-tier subcontractors and suppliers will be forced to halt work. The ripple effects will lead to financial instability across the contracting network.
Any business with ties to government contracts should prepare to be impacted. Create a financial action plan that can be immediately implemented if circumstances warrant it.
Step 1: Establish a cost service center for wind-down activities

Work stoppage carries its own costs, including the expenses associated with securing work, relocating employees, and overhead. Setting up a cost service center specifically for termination and stop-work activities can ensure that all costs related to winding down operations are properly tracked and allocated. Considerations include:
• Labor costs: employee severance, retention agreements, or relocation costs.
• Subcontractor and supplier claims: settling outstanding payments and ensuring documentation for potential recoverable costs.
• Lease obligations: contracts involving dedicated office space or facilities might require arrangements for subletting, breaking leases, or negotiating buyouts.
• Rate variances on cost-type contracts: unresolved indirect cost rate variances should be identified and included in Requests for Equitable Adjustments (REAs) or termination settlements.
• Legal and compliance costs: attorneys, accountants, and consultants engaged to support the settlement process may be considered allowable costs under the Federal Acquisition Regulation (FAR).
Step 2: Identify lagging costs that require inclusion
Even after a termination or stop-work order is issued, certain costs continue to accrue and must be factored into the final settlement. These include:
• Pending invoices: Ensure that all costs incurred before the order are invoiced and accounted for.
• Demobilization costs: any expenses related to securing materials, shutting down job sites, and transitioning employees off projects should be tracked.
• Ongoing fringe and benefits costs: health insurance, retirement contributions, and accrued leave obligations may persist beyond termination dates and should be considered.
• Equipment and asset depreciation: if government property or contractor-acquired assets were used exclusively for a terminated contract, their disposal or continued cost burden must be evaluated.
Step 3: Engage with legal and financial advisors
Government contractors’ ability to navigate work stoppages and maintain long-term financial stability will depend heavily on pristine documentation and thorough recordkeeping. Legal and financial advisors can help contractors establish recordkeeping best practices in compliance with contractual obligations and can assist with cost recovery efforts.
The bottom line
Contractors must be proactive in their response to widespread contract terminations. Early engagement with the contracting officer, thorough cost tracking, and strategic workforce planning can make the difference between a manageable situation and a financial disaster. By considering all potential cost impacts and maintaining robust documentation, contractors can position themselves for a fair settlement while protecting their business interests.
Navigating terminations, claims, and settlement packages can be complex, but you don’t have to do it alone. Aprio’s Government Contracting and Nonprofit teams are here to guide you every step of the way. Whether you need support with claims work, termination settlements, or strategic planning, we’re ready to help.
Connect with an Aprio team member today.

About Aprio
Since 1952, clients throughout the U.S. and across more than 50 countries have trusted Aprio for guidance on how to achieve what’s next. As a premier business advisory and accounting firm, Aprio Advisory Group, LLC, delivers advisory, tax, managed and private client services to build value, drive growth, manage risk and protect wealth, and Aprio, LLP, provides audit and attest services. With proven experience and genuine care, Aprio serves individuals, entrepreneurs, and businesses, from promising startups to market leaders alike. Aprio.com

How Zero Trust Opens the Path to Digital Transformation

If you haven’t been a victim of a cybercrime, it’s likely you have a friend or family member who has been. About 2,200 cyberattacks occur every day, resulting in more than 800,000 victims of ransomware attacks, phishing scams, or data breaches per year.
Traditional security models were built for the on-premises IT enterprise. But with the explosion in remote and hybrid work, users, data, and resources are geographically dispersed to every corner of the globe, creating more openings for hackers and bad actors to attack vulnerable data and systems.
To counter the increasingly dangerous digital environment and unlock digital transformation, organizations must look to zero trust security architectures as the key to secure infrastructure solutions.
WHAT IS ZERO TRUST?
Zero trust is a set of cybersecurity paradigms that focus on users, assets, and resources, rather than network-based perimeters. Zero trust represents a significant shift away from implicit trust: users and devices are no longer considered inherently trustworthy.
Compare traditional security models to a castle and moat. The castle represents an organization's network, and the moat is the network perimeter. Once the guards open the gate and lower the drawbridge, someone can come into the castle and essentially do whatever they want. If a bad actor penetrates an organization's network, they can access all of the systems within, stealing sensitive data, implanting malware, or committing other malicious acts. Remote work essentially sets up more drawbridges to locations around the world. More drawbridges equal more vulnerability points.
Zero trust architecture, however, assumes that there are security risks already inside and outside of the network. Nothing inside or outside the network is trusted, requiring strict verification for every user and device before granting access to data and applications.
Zero trust is founded on five tenets:
• Assume a hostile environment. No asset is trusted; instead they are guilty until proven innocent.
• Presume breach. Zero trust assumes that malicious assets are already inside the network.
• Never trust, always verify. Access is denied by default.
• Scrutinize explicitly. Access to resources is conditional and can change at any time.
• Apply unified analytics. Analytics span data, applications, assets, and services.
These five tenets comprise the basic principle of least-privilege access: users get only the minimum access they need to perform their tasks or missions. This is achieved through technologies like multi-factor authentication, virtual private networks, micro-segmentation, and restricting data access to only privileged users.
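As a toy illustration of deny-by-default, least-privilege access (a simplified Python sketch; real zero trust deployments enforce these checks in identity providers and policy engines rather than application code, and the users and resources here are invented):

    # Deny-by-default access check: every request is evaluated against
    # explicit policy; nothing is trusted because of network location.
    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        device_compliant: bool  # e.g., a patched, managed endpoint
        mfa_verified: bool      # multi-factor authentication passed
        resource: str
        action: str

    # Least-privilege policy: users get only the specific resource/action
    # pairs they need. (Entries are illustrative.)
    POLICY = {
        "alice": {("payroll-db", "read")},
        "bob": {("build-server", "read"), ("build-server", "deploy")},
    }

    def authorize(req: Request) -> bool:
        """Grant access only if every condition is explicitly satisfied."""
        if not (req.mfa_verified and req.device_compliant):
            return False                       # verify user AND device, every time
        allowed = POLICY.get(req.user, set())  # unknown users get nothing
        return (req.resource, req.action) in allowed

    # Even a compliant, MFA-verified user cannot exceed granted privileges:
    print(authorize(Request("alice", True, True, "payroll-db", "write")))  # False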
MYTHS OF ZERO TRUST
Sometimes, to understand something, it’s best to start with what it is not. Zero trust isn’t just one technology. It’s a paradigm shift, which emphasizes a set of cybersecurity principles that organizations implement across a range of technologies to address risk. Technology is just part of this shift. Of equal importance, organizations must prepare for workforce reskilling, adoption of new processes, organizational culture change, and a multi-year transformation process.
There is no silver bullet for zero trust; it is not something you can buy from a single vendor. Zero trust involves many providers and services whose products must work together. Additionally, organizations must identify how to either align or adjust their preexisting tools to meet zero trust requirements.
HOW TO ACCELERATE ZERO TRUST
Zero trust is key to achieving digital transformation. It is the North Star guiding organizations toward a more resilient, secure framework. As your organization embarks on its zero trust journey, stay focused on the end goal and the benefits.
Zero trust provides a new virtual perimeter in which data and application resources can be accessed only by authorized, authenticated users and devices, from anywhere. Those users and devices are authorized based on identity and on location- or security-posture-based context. Access to resources is governed by granular, dynamic access policies that adapt to current security risk profiles.
Our Zero Trust Playbook recommends a 4-part approach, based on implementation best practices and lessons learned, to accelerate implementation:
1. Develop the zero trust strategy for the organization. Resist the urge to jump into implementation activities before having created a clear picture of your as-is and to-be environment. Conduct a maturity assessment to understand what investments you have made already and how they align with zero trust requirements. This allows you to identify gaps and then create a realistic plan for filling them.
2. Deploy technology solutions. They should either focus on expanding or reconfiguring existing technologies or implementing something new (such as micro-segmentation) to fill gaps.
3. Enhance policies and processes. Although there is a lot of focus on technology solutions for zero trust activities, do not neglect the importance of reviewing work processes and practices and enhancing them to adopt cyber security principles. For instance, define the least privilege policies for systems administrators.
4. Develop your workforce. Pay special attention to developing your workforce’s relevant skills to enable them to embrace and adopt zero trust. It is important to foster a zero trust culture and mindset.
Zero trust reduces the attack surface and risk of enterprise-wide vulnerabilities while preventing threats from adversaries both inside and outside of the network. Zero trust also enables data sharing and risk management in mission partner environments, enabling government agencies and members of the Department of Defense to exchange information securely with partners. Information, when compartmentalized based on classification and mission access, can be quickly accessible to any user who needs it.
Zero trust is not a destination. It’s a journey, and an organization committed to this journey must continually leverage tools like CACI’s Zero Trust Playbook to assess and improve its cybersecurity posture and maturity.




ABOUT CACI
At CACI International Inc (NYSE: CACI), our 25,000 talented and dynamic employees are ever vigilant in delivering distinctive expertise and differentiated technology to meet our customers’ greatest challenges in national security. We are a company of good character, relentless innovation, and long-standing excellence. Our culture drives our success and earns us recognition as a Fortune World’s Most Admired Company. CACI is a member of the Fortune 1000 Largest Companies, the Russell 1000 Index, and the S&P MidCap 400 Index. For more information, visit us at www.caci.com.

Regulated industries, such as aerospace and defense, government, and utilities, have critical compliance and data security needs, and focusing on selecting the right ERP system is paramount.



Choosing the Right ERP in Regulated Industries
September 2024
Questions posed by: Cognitus
Answers by: Mickey North Rizza, Group Vice President, Enterprise Software, and Jeff Hojlo, Research Vice President, Future of Industry Ecosystems, Innovation Strategies, and Energy Insights



How important is a platform approach to ERP for organizations in regulated industries such as aerospace, defense, government, and utilities?
A digital platform approach is very important for regulated industries such as aerospace, defense, government, and utilities due to the complexity of business processes, the tremendous amount of data these organizations have, and regulatory compliance. According to Worldwide IDC Global DataSphere Forecast, 2023–2027: It's a Distributed, Diverse, and Dynamic (3D) DataSphere (IDC #US50554523, April 2023), the manufacturing industry will generate on average (varying by subindustry) 81,877 EB (or 82 ZB) of data in 2030. This is up from "just" 2 EB in 2022. To put that in perspective, by 2026, manufacturing organizations will be creating 7 PB of data per second. The challenge is that these organizations don't have the systems, processes, and resources to take advantage of this information: 42% of enterprises say that they underutilize data.
This may be the case because data is the input and the outcome on which organizations are focusing, as data is necessary for applications, models, and LLMs. Further, the insights and knowledge from this data empower organizations to innovate, meet customer needs, and operate their businesses efficiently. When companies take an enterprisewide platform approach to data collation, analytics, and federation, regulatory compliance is simplified.
Cross-organization, cross-ecosystem ERP platforms help mitigate risks and optimize the opportunities associated with this massive trove of data by establishing a digital thread across the supply chain for better insights and decision-making. This digital thread includes R&D and product development, procurement, contracts and financials, program management, supply chain, production and operations, regulatory, and aftermarket services. Using a platform approach, each of these teams has access to the same data. The result is faster innovation, a bidirectional flow of information across the organization and ecosystem, reliable achievement of quality and compliance goals, and a unified approach to ongoing customer engagement. Embedded analytics and AI complement each of these areas.

How do private clouds make ERP modernization easier for regulated industry organizations with legacy systems?

Regulated industries require compliance and data security. Private clouds meet their requirements and offer a natural step in the progression of ERP modernization away from legacy applications. In IDC's 2023 Worldwide Industry CloudPath Survey, 25% of respondents reported that the move to the cloud strengthened their regulatory compliance and 23% of respondents noted it helped drive innovation and/or digital transformation activities.
An aerospace and defense manufacturer told IDC that it realized it was relatively easy to migrate to its private cloud ERP system because moving to a more modern system preserves existing processes and data and minimizes disruption. In addition, the manufacturer was able to embrace innovation and tailor new innovative solutions to gain operational efficiencies.
Private clouds offer more robust best practice models with standardized workflows and enable organizations to focus on the industry's true differentiation areas. As the differentiators are brought forward in the required workflows, the organization can remain compliant with industry regulations. Private clouds enable the organization to build in what it needs. These workflows can then be integrated into the ERP system either by the organization or by its services provider. Organizations moving to private clouds find it easier to retire technical debt associated with legacy systems while capitalizing on modern cloud systems with more innovation, such as AI and ML.
In addition, organizations moving to private clouds from legacy systems find they can move to a managed services provider while outsourcing cumbersome technical aspects. This avenue helps lower overall total internal costs while also helping the company move forward with a better system for its industry.

For regulated industries, how are compliance, security, and automation enabled in private clouds?

The private cloud brings full control to the organization for infrastructure, applications, and data as it is stored in a dedicated cloud environment. This dedicated environment is isolated from other organizations such that a company can still maintain strict controls, and different business units and departments can use what they need. Organizations in regulated industries find they can reap the reward of more modern systems and enable more automated workflows without losing control. By increasing the control and visibility of the data, companies can customize the systems as necessary, such as for security policies, and ensure their own protection. IDC's 2023 Worldwide Industry CloudPath Survey found that 24.5% of respondents improved their IT security by moving to the private cloud.
When organizations utilize cloud service providers with large datacenters and services, often called hyperscalers, for their private cloud hosting, they find robust security, compliance, and automation. These capabilities lower the burden on the organization of establishing these postures itself. Private clouds can bring much more cost-effective service-level agreements. Even if a company outsources the hosting services, benefits from third parties usually include guaranteed uptime, backup and recovery, disaster recovery, and security elements (physical/cyber).


How is AI impacting regulated industries, and what does this mean for ERP?

Let's first talk about the different elements of AI. These include ML, deep learning, automatic speech recognition and natural language processing (NLP), and generative AI (GenAI). ML algorithms have been built into applications to automate workflows and routines and analyze data. Deep learning (a subset of ML) leverages neural networks to simulate human decision-making. NLP is used to enhance information searches and system usability through text and speech data, enabling tools such as chatbots. GenAI enables users to create new content in response to short prompts. Regarding GenAI, the leading use cases that regulated industry organizations experiment with include quality and compliance verification, strategic sourcing, regulatory compliance submissions, contract creation, and predictive maintenance.
In support of these various use cases, ERP can provide a platform for data, applications, and analytics that supports operation and decision-making. AI embedded into ERP systems will enable better process automation, augmentation of skills and knowledge, predictive analytics for proactive service, and a reduction of the cost of quality. We also see AI playing a role in improving key functions such as contract life-cycle management, both the creation and ongoing management across the supply chain.
It is important to note that a hybrid public/private cloud approach will be important for companies in running their business. The cloud provides organizations in regulated industries with benefits such as improved global collaboration and R&D, easier (and some would say more secure) sharing of data, and improved supply chain visibility. However, there are also native stores of compliance-related data, as well as engineering and R&D intellectual property, that organizations do not want to share in the public sphere. In this age of advanced AI, companies will also want to keep some AI models private and not exposed to outside, nonpartner organizations.

Connecting the aftermarket services process upstream to supply planning, contract management, and procurement is critical. Where are most companies in their maturity, and what are their expectations for modern ERP to support this?

Connecting the aftermarket process to product, manufacturing, and supply planning and execution is largely immature in most companies. There is a recognized need to "close the loop" across this chain, but technology, organizational limitations, and inertia prevent this from happening consistently. The end results are twofold: service delivery response time will be slow, and organizations fail to learn over time from the service and customer experiences, meaning they continue making the same mistakes. Having a closed loop to and from customer and field services enables a fast response to opportunity or issue, leading to a better customer experience that drives future revenue.
As companies move to the next generation of ERP, the service to supply chain integration becomes more prevalent. The expectation for modern ERP is that data models are connected, and decision support is enabled through AI/ML for enhanced predictive, planning, and forecasting capability. In IDC's 2024 Global Supply Chain Survey, one of the top 3 priorities for supply chains over the next 12 months is to improve supply chain resiliency through better visibility. Complementing this, online transaction processing and online analytical processing on the same platform provide more dynamic material requirements planning and supply chain modeling capabilities. In addition to AI, technology such as augmented reality and digital twins (2D or 3D) enable organizations to work in a blended physical and digital way. This could be to visualize products, assets, processes, and data in support of enterprise and ecosystem operation. IDC's Future of Industry Ecosystems Survey shows a desire to apply visualization to product and service development, customer support and issue resolution, training and education, and improved customer experiences.
About the Analysts

Mickey North Rizza, Group Vice President, Enterprise Software
Mickey North Rizza is group vice president for IDC's Enterprise Software. She leads the Enterprise Applications and Strategies research service along with a team of analysts responsible for IDC's coverage of the next generation of enterprise applications, including CX marketing and sales technologies, digital commerce, enterprise asset management and smart facilities, ERP, financial applications, procurement, and professional services automation and related project-based solutions software.
Jeff Hojlo, Research Vice President, Future of Industry Ecosystems, Innovation Strategies, and Energy Insights
As Research Vice President, Future of Industry Ecosystems, Innovation Strategies, and Energy Insights at IDC, Jeff Hojlo leads one of IDC's Future Enterprise practices: the Future of Industry Ecosystems. This practice focuses on three areas that help create and optimize trusted industry ecosystems and next-generation value chains in discrete and process manufacturing, construction, healthcare, retail, and other industries: shared data and insight, shared applications, and shared operations and expertise. Mr. Hojlo manages a group focused on the research and analysis of the design, simulation, innovation, product lifecycle management (PLM), and service life-cycle management (SLM) market, including emerging strategies across discrete and process manufacturing industries such as product innovation platforms and the closed-loop digital thread of product design, development, digital manufacturing, supply chain, and SLM.

MESSAGE FROM THE SPONSOR
Streamline government contracting and support end-to-end compliance within regulated industries.
Cognitus is an SAP Gold Partner specializing in the implementation, deployment, and support of SAP solutions. They offer a range of services, including SAP S/4HANA integration, RISE with SAP, cloud solutions, application management, and end-to-end digital transformation, and are a go-to partner for global government contractors. Their AI-powered solutions, co-innovated with SAP, help businesses maintain regulatory compliance, streamline data migration and contract lifecycle management, accelerate real-time billing, and enhance aftermarket processes to maximize value and gain a competitive edge.
For more details, please visit: https://cognitus.com/
IDC Research, Inc.
140 Kendrick Street
Building B
Needham, MA 02494, USA
T 508.872.8200
F 508.935.4015
Twitter @IDC
blogs.idc.com
www.idc.com
This publication was produced by IDC Custom Solutions. The opinion, analysis, and research results presented herein are drawn from more detailed research and analysis independently conducted and published by IDC, unless specific vendor sponsorship is noted. IDC Custom Solutions makes IDC content available in a wide range of formats for distribution by various companies. A license to distribute IDC content does not imply endorsement of or opinion about the licensee.
External Publication of IDC Information and Data: Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.
Copyright 2024 IDC. Reproduction without written permission is completely forbidden.



Preserving Organic Growth in the Wake of M&A: A Strategic Imperative
The government contracting landscape is entering a period marked by accelerated mergers and acquisitions. In this environment, leadership teams are making critical decisions not just about whom to acquire but how to preserve value after the deal closes. Yet too often, the integration playbook emphasizes systems and synergies at the expense of the very lifeblood of future growth: a realistic, high-probability-of-win (pWin) pipeline.
Organic growth, the disciplined development of winnable opportunities, is frequently the first casualty of M&A. Deals get paused. BD teams are restructured or realigned. Leadership's attention shifts to integration KPIs. But what if the most important driver of your valuation and future wins is what's being neglected?
Experienced buyers increasingly recognize that due diligence doesn’t stop with the balance sheet. The real question is: Can the target company actually win the work it says it can? That’s not a question accounting teams can answer. It requires a nuanced evaluation of pipeline fit, strategic alignment, and opportunity realism.
Further, newly merged entities often overlook a key opportunity: the ability to recombine qualifications, past performance, and contract thresholds to compete for larger, more complex deals than either firm could alone. Doing this effectively takes more than awareness—it takes methodical, experienced pipeline engineering and rigorous, intelligence-led validation.
Firms that succeed in today’s environment are those that bring rigor to both sides of the growth equation: vetting the pipeline before the deal, and preserving momentum after it. Those who assume growth will resume organically post-close often find themselves months behind, chasing stale opportunities with a disoriented team.
In this market, the message is clear: the most valuable asset you acquire may not be the company itself but the quality and winnability of its next 24 months of growth.

This insight is brought to you by the d8 Group Pipeline Authority & Organic Growth Experts.
Established in Washington, D.C. in 2018, d8 Group supports strategic growth in the federal contracting space with unmatched rigor and real-world expertise. Our proprietary IQ²B© Process (Identify, Qualify, Quantify, Bid) has helped over 135 government contracting firms build and execute high-fidelity pipelines pre- and post-M&A.
Contact us:
703.214.2025
Info@d8group.com
d8group.com

AI’s Transformative Role in Driving Better Project Outcomes
With challenging market trends impacting government contracting over the past few years — along with skyrocketing inflation, the pressure to avoid a recession and uncertainty amidst the new administration — contractors have responded by reprioritizing their strategies to become more competitive, win more business and grow. With the focus on looking inward and trying to find as much efficiency as possible, it’s no wonder they’re looking to AI and automation to streamline their operations. By optimizing processes that cut labor costs and improve performance on contracted projects, businesses can position themselves competitively with proven past performance, increasing the likelihood of winning future business.
In 2025 and beyond, AI will continue to reshape the landscape for project-based businesses.
Some of the most significant impacts will be seen in the following areas:
Predictive Analytics
With the rapid pace of technology evolution, the focus will shift beyond basic automation to predictive intelligence that supports every aspect of the project lifecycle. During the business development phase, AI can help search for opportunities, identify best-fit leads, understand the competitive landscape and find teaming partners, enhancing efficiency and ensuring resources are allocated wisely. Beyond business development, AI helps forecast outcomes throughout the project lifecycle. It can also automate complex workflows that are often more manually intensive and prone to human error.
Leveraging machine learning algorithms, contractors can anticipate customer demand, optimize project staffing and fine-tune supply chain operations. This foresight helps them make proactive decisions, ensuring the organization remains competitive and responsive to market changes. AI technology also provides the ability to forecast project risks, estimate resource requirements and optimize budget allocation. This, in turn, enables contractors to make more informed decisions and mitigate potential challenges proactively. AI-enabled decision support systems can also simulate different scenarios, evaluate potential outcomes and recommend optimal courses of action based on predefined objectives and constraints.
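As one deliberately simplified illustration of what such a predictive model can look like (scikit-learn is assumed, and the features, data, and projects are invented for the example), a contractor could train a classifier on historical project records to flag prospective projects at risk of overrun:

    # Toy overrun-risk model; features and data are invented for illustration,
    # not drawn from any real contractor dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Historical projects: [staff_count, duration_months, subcontractor_pct]
    X = np.array([[10, 12, 0.2], [45, 36, 0.6], [8, 6, 0.1],
                  [60, 48, 0.7], [15, 18, 0.3], [50, 30, 0.5]])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = project overran its budget

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Score a prospective project before committing resources.
    candidate = np.array([[40, 30, 0.55]])
    risk = model.predict_proba(candidate)[0, 1]
    print(f"Estimated overrun risk: {risk:.0%}")  # informs staffing and bid decisions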
By leveraging predictive intelligence, government contractors can improve efficiency and drive better business outcomes across mission-critical areas of operations, including procurement, project management and strategic planning initiatives.
Enhanced Security & Compliance
AI is poised to revolutionize the security landscape. By continuously monitoring for potential threats and ensuring adherence to regulatory requirements, AI will bolster security and trust, and pave the way for a more secure and resilient future for project-based businesses.
AI-powered risk detection systems can monitor financial transactions, project timelines, and regulatory compliance to identify issues like fraud, delays or non-compliance. Beyond this, predictive analytics models can forecast future risks based on historical data and market trends, enabling contractors to implement preemptive measures to mitigate potential threats. By integrating AI into risk management processes, government contractors can improve risk detection, enhance decision-making and protect against possible losses — ensuring the success of their engagement while remaining compliant and protecting the bottom line.
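A minimal sketch of that kind of monitoring (assuming scikit-learn's IsolationForest; the transactions and features are invented, and a production system would use far richer data plus human review workflows):

    # Toy anomaly detector over financial transactions; all data and
    # feature choices are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Features per transaction: [amount_usd, hours_since_last_invoice]
    history = np.array([[1200, 720], [980, 650], [1500, 700],
                        [1100, 760], [1300, 690], [1050, 710]])
    detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

    new_txns = np.array([[1250, 705],     # routine payment
                         [98000, 2]])     # large, unusually rapid payment
    flags = detector.predict(new_txns)    # -1 = anomaly, 1 = normal
    for txn, flag in zip(new_txns, flags):
        if flag == -1:
            print(f"Transaction {txn} flagged for human follow-up")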
Next-Generation ERP
Integrating AI into enterprise resource planning (ERP) systems also represents a transformative advancement. It boosts their capabilities and allows organizations to automate routine tasks, derive valuable insights from data, predict future trends and make more informed decisions.
AI technologies can streamline tasks such as data analysis, document management and contract review, reducing manual efforts and time spent on administrative tasks. It allows for the rapid analysis of large datasets to identify trends, forecast demand and optimize resource allocation, enabling contractors to make data-driven decisions and improve strategic planning. In addition, AI-enabled chatbots and virtual assistants enhance communication and collaboration among team members and other stakeholders, providing real-time support and guidance. As a result, workers can find the information they need when needed, enabling them to work more effectively and efficiently.
Lastly, AI’s ability to streamline interactions, personalize user interfaces and provide contextual assistance creates more intuitive user experiences, making it easier for users to navigate and access the information they need. With AI-driven features such as predictive insights and natural language processing, modern ERP systems can offer a much more interactive and engaging experience.
While the future of AI is yet to be fully realized, its significance is evident, and it will continue to serve as a significant factor in government contracting growth and innovation. However, as this technology evolves, it’s crucial to consider how to utilize it best and ensure its effectiveness within your organization.
Key Considerations to Keep in Mind
As AI helps organizations navigate the complexities of the government contracting landscape, some key factors must be kept in mind to ensure successful implementation.
Security and Privacy
When deploying AI, government contractors must prioritize robust security measures to safeguard sensitive data, uphold privacy regulations and maintain compliance. Ensuring encryption, advanced access controls and data anonymization techniques are crucial for protecting information and preventing unauthorized access.
Cybersecurity Risks
To safeguard sensitive data and company IP, contractors should assess and mitigate cybersecurity risks associated with AI deployment, including vulnerabilities to attacks and data breaches. By choosing a trusted cloud provider, following cybersecurity best practices and implementing robust authentication protocols, end-to-end encryption and intrusion detection systems, contractors can safeguard AI systems from cyber threats and ensure the confidentiality of the data they manage.
In addition to fortifying networks and infrastructure, contractors also need to ensure that their employees can use AI productively and securely. As such, they must invest in training to ensure their team can manage and operate AI systems effectively. Investing in workplace development programs, thoroughly documenting processes and promoting cross-team collaboration can help bridge skill gaps, empowering employees to leverage AI systems to their full potential.
Subcontractor Oversight
Prime contractors who work with subcontractors to execute projects need to also ensure those subcontracting organizations are using AI responsibly. To do that, contractors must maintain oversight of their systems to ensure compliance with contractual requirements and regulatory standards. Establishing clear communication channels, conducting regular audits and enforcing accountability measures can help mitigate risks and ensure subcontractors adhere to established guidelines and best practices.
AI will undoubtedly continue to impact and significantly transform the government contracting industry. From streamlining processes to enhanced decision-making and increasing operational efficiencies, AI will help contractors reduce costs, better manage projects throughout the project lifecycle, drive more efficient and effective operations, and ultimately drive better project outcomes.
Power Your Business with Deltek
At Deltek, we leverage purposeful innovation and AI capabilities to power the project lifecycle for government contractors. Our AI-powered intelligent business companion, Deltek Dela™, enhances productivity, accuracy and value from opportunity pursuit through contract closeout, making the project lifecycle smarter, more efficient and more effective for government contractors. Dela helps our customers streamline and automate various tasks and business processes while reducing manual intervention, minimizing human error and increasing overall project success.
Better software means better projects. Deltek is the leading global provider of enterprise software and information solutions for project-based businesses. More than 30,000 organizations and millions of users in over 80 countries around the world rely on Deltek for superior levels of project intelligence, management and collaboration. Our industry-focused expertise powers project success by helping firms achieve performance that maximizes productivity and revenue. deltek.com



Maximizing Productivity in an Era of Constrained Resources
Practical Approaches to Meet the Federal Mission


Across the federal landscape, agencies face significant change management challenges. Senior leadership transitions bring far-reaching expectations, including shifting missions and priorities, resource reductions, and demands for greater efficiency and productivity. At the same time, the public expects essential services to be delivered seamlessly. To meet these demands, federal leaders must adopt a strategic, adaptable, mission-focused, and workforce-centric approach.
Government contractors can help their federal partners by providing project management expertise coupled with practical, executable solutions. This involves bringing proven tools and strategies to optimize organizational structures, streamline business processes, and strengthen workforce competencies. This article explores effective approaches to assess organizational and workforce capacity, improve operational efficiency, and implement mission-driven strategies needed to achieve goals in a resource-constrained environment.
Strengthen Strategic Planning & Assessment
Effective strategic planning begins with clearly defining and aligning mission and priorities to ensure resources are allocated efficiently. Organizations should communicate goals clearly and focus on high-impact projects that deliver the most value to the mission and the public while streamlining or eliminating lower-priority tasks. Conducting an organizational assessment that leverages analytics to assess program effectiveness and guide resource allocation supports sound strategic planning and decision making. Contractors can apply project management expertise to help federal sponsors design and implement a structured approach to conduct an effective assessment.
1. Define the Purpose and Scope
Clearly define the assessment’s objectives, scope, and key performance indicators to ensure a focused and measurable evaluation.
2. Gather Stakeholder Input
Design surveys, interview guides, and focus groups to engage leadership, employees, and external partners and gather diverse perspectives on organizational performance.
3. Collect and Analyze Data
Use quantitative and qualitative methods to assess performance metrics, workforce engagement, process efficiency, and benchmarking comparisons.
4. Identify Strengths and Weaknesses
Analyze findings to highlight key strengths, inefficiencies, and risks, using tools like SWOT analysis for a comprehensive overview.

5. Develop Actionable Recommendations
Prioritize improvements, create targeted strategies, and align recommendations with agency goals and resources.
6. Implement Changes
Establish a clear action plan with defined objectives, assigned responsibilities, and timelines to ensure successful execution of improvements.
7. Monitor Progress and Adjust
Utilize scheduled reviews, performance metrics, and stakeholder feedback to assess progress and make strategic adjustments as needed to ensure continued effectiveness and alignment with organizational goals.
Helping agencies design, facilitate, analyze, and implement assessment results is a value-add that goes beyond task delivery: it's about helping clients operate smarter and more strategically.
Enhance Workforce Planning & Talent Assessment
In today’s dynamic environment, shifting mission priorities and evolving business demands require a broad and agile workforce. Effective workforce planning begins with defining strategic priorities and aligning organizational structures, processes, and resources for optimal results. A key component of this planning is assessing workforce capabilities to identify skill gaps, leverage strengths, and develop strategies that enhance alignment with mission needs and overall efficiency. A well-executed workforce assessment considers business and mission objectives, integrates environmental factors, and addresses the entire workforce lifecycle to drive meaningful results.

Contractors can play a valuable role in designing and executing comprehensive skills assessments. The process begins with developing a skills framework based on job roles, industry standards, and mission requirements. These frameworks help establish effective talent assessment strategies to identify organizational and individual capabilities, guide workforce development, and sustain a highly skilled and agile workforce.
The assessment process starts with an occupational analysis, incorporating input from Subject Matter Experts (SMEs). Various federal and private-sector competency frameworks can be leveraged to source relevant skills. The next step is selecting assessment methods and tools to evaluate competency requirements at both the organizational and individual levels. These tools may include self and supervisory proficiency assessments, structured surveys, and 360-degree feedback. Clear instructions should be provided to ensure accuracy and consistency in the assessment process.
Once the assessment is conducted, results must be aggregated, analyzed, and presented in a format that facilitates decision-making. Findings should inform a targeted action plan that identifies key skill gaps, recommends development strategies (such as upskilling or cross-training), and ensures workforce capabilities are better aligned with business needs. Periodic reassessment allows organizations to track progress, refine mitigation strategies, and recognize workforce achievements.
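As a small sketch of the aggregation step (pandas is assumed; the employees, skills, scores, and targets are invented for illustration), a basic gap analysis compares mean assessed proficiency against the target level for each skill:

    # Toy skill-gap aggregation; all names and numbers are invented.
    import pandas as pd

    assessments = pd.DataFrame({
        "employee": ["a", "b", "c", "a", "b", "c"],
        "skill":    ["data analysis"] * 3 + ["cybersecurity"] * 3,
        "score":    [3, 2, 4, 1, 2, 2],  # assessed proficiency, 1-5 scale
    })
    targets = pd.Series({"data analysis": 3, "cybersecurity": 4})

    mean_scores = assessments.groupby("skill")["score"].mean()
    gaps = (targets - mean_scores).clip(lower=0).sort_values(ascending=False)
    print(gaps)  # largest gaps first: candidates for upskilling or cross-training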


A properly executed talent assessment ensures that workforce capabilities align with organizational priorities, processes, and tools—ultimately strengthening mission readiness and operational effectiveness.
Workforce Development: Building Capacity from Within
In a resource-constrained federal environment where hiring is limited and employees are asked to do more with less, agencies must maximize the capabilities of the workforce through targeted, mission-driven training. However, traditional broad-based training programs can be costly and time-consuming. To be effective, training must be strategically focused on developing skills that directly support mission objectives while fitting within budgetary and time constraints.
One approach is broadening employees by cross-training them to take on new roles or responsibilities, ensuring operational continuity during staffing gaps. This can be accomplished by utilizing the skills assessment discussed above to identify overlapping competencies within teams and structuring training to be hands-on and directly relevant to day-to-day functions. Peer mentoring, job shadowing, and rotational assignments can supplement formal training at little to no cost while allowing employees to gain practical experience in new areas. Agencies can also leverage existing online training platforms, internal subject-matter experts, and knowledge-sharing communities to facilitate skill transfer without requiring significant new investments.

To address leadership shortfalls, agencies should focus on developing internal leadership pipelines via succession and knowledge transfer programs. This starts with identifying high-potential employees and equipping them with essential management skills before critical vacancies arise. Knowledge transfer tools, microlearning modules, leadership coaching, and structured mentorship programs can build expertise and understanding without pulling employees away from their responsibilities for extended periods.
Finally, as the federal mission increasingly relies on emerging technologies, agencies must prioritize upskilling employees in critical areas like artificial intelligence (AI), data analysis, and cybersecurity. Rather than outsourcing all technical expertise, agencies can invest in targeted, role-specific training that allows employees to integrate new technologies into their work. Free or low-cost training resources from federal partners, industry groups, and academic institutions can be leveraged to build these skills. Encouraging participation in interagency working groups, technology pilot projects, and professional development communities can also help employees stay ahead of evolving demands.
Contractors can help align training efforts with mission needs, identify cost-effective learning approaches, and help foster a culture of continuous learning. The goal is to continue to enhance workforce capabilities, improve operational resilience, and ensure long-term mission success, even in the face of resource constraints.

Peer mentoring and job shadowing empower employees to step into new roles and grow talent from within — building critical skills and leadership without added cost.

Foster a Resilient & Engaged Workforce
In today’s turbulent federal environment, employees face growing uncertainty and a deep sense of underappreciation. The “deal” feels increasingly one-sided as they witness sweeping changes to programs and resources, along with the voluntary or forced departure of many colleagues. If leadership fails to address these concerns effectively, the consequences will be severe: declining performance, lower engagement, and a workplace culture that makes attracting and retaining top talent nearly impossible.
An effective communication strategy is the first step in navigating organizational uncertainty, fostering trust, and securing stakeholder buy-in. However, employee awareness, understanding, and acceptance are not achieved through a single message. Instead, they develop through continuous outreach and consistent messaging. A structured, time-based communication approach—starting with senior leadership and cascading through management to individual contributors—creates clarity, alignment, and a smoother transition.
Contractors can play a key role in developing a strategic communication plan by analyzing the current landscape, aligning the strategy with organizational goals, crafting effective messages, leveraging communication channels, and gathering and evaluating stakeholder feedback.
Once effective communication processes are in place, trust can be rebuilt and engagement strengthened through targeted initiatives like Employee Value Propositions (EVP) or similar tools. This begins with understanding what employees value most. For example, as employees transition back to the physical workplace, management can ease the adjustment with solutions such as improved space management and technology, access to fitness facilities, food and childcare options, and flexible work schedules. Simply seeking employee input demonstrates leadership’s commitment to their well-being.
For an EVP to be effective, it must be consistently applied, serving as a clear agreement between leadership and employees. Its impact can be measured through Federal Employee Viewpoint Survey results, as well as specially designed onboarding, sustainment, and departure surveys. Contractors can assist in designing and interpreting these surveys, ensuring engagement efforts are effective and recommending adjustments to address emerging issues and concerns.
Strategic Workforce Management as a Mission Enabler
The reality is that reduced staffing and shifting priorities are not temporary challenges—they are likely to be the norm for federal agencies in the years to come. But with proactive assessment, smart alignment, targeted training, and workforce care, agencies can meet their mission needs—even with fewer people.
For those of us in the government contracting community, helping our clients navigate these challenges isn't just good partnership; it's essential to the success of the broader federal mission. Whether through supporting assessments, offering upskilling solutions, or helping agencies rethink workforce strategies, we can bring value that goes far beyond headcount.
Ultimately, optimization isn’t about doing more with less—it’s about doing what matters most, with the right people, in the right ways. And when we help our clients achieve that, we all succeed.



To ensure the fairness, transparency, and accountability that trust requires, you must understand the data driving an AI model. "You have to understand on which data you're getting outputs," Lee said, adding, "If the language model isn't tailored to meet your needs, then you're just using the wrong product. This is analogous to other technology adoption phenomena, of course, but if you're not clear on the requirements, then you're inviting risks in a way that might prove dangerous. For AI adoption, you'll need to ensure that you're not giving away IP rights, violating someone's privacy, or creating new liabilities for your organization."
Manage your approach
AI technology, and the required risk management, can seem like a complex proposition for a small or midsized professional services firm. That's why some firms look for managed third-party AI services where the risks and controls are already in place.
"It's common," Lee said. "A lot of our clients are saying, 'We don't have the staff or infrastructure to handle this nuanced, complicated technology. So, can't we just find a vendor who can help us do this?'"
"While I understand that inclination, it's a dangerous formulation," Lee said. "The most dangerous part is the implicit part - the things that you're not considering before you start." Companies need to ask a series of questions before they bring AI capabilities into their environment.
Isolation
"The first question to ask, perhaps above all the others, is: 'Are we operating within a walled garden?"' Lee said. Consider the developer's maxim that, "If !:JOU aren't pa!:Jing for the product, then !:JOU are the product" - most free services pa!:J their expenses b!:J monetizing the data and other information the!:J collect from users.
"If !:JOU just go to a free GenAI interface and enter proprietor!:) information, !:JOu're giving that IP awa!:J in WO!:JS that are prett!:) occult, hard to track and impossible to recover," Lee said. "You shouldn't blindl!:J trust vendors, especiall!:J the less-established ones, to alwa!:JS share the truth about that."
Stakeholders
While business areas must take ownership of their technolog!:J solutions, the!:J must also consult other areas before emplo!:Jing a new Al solution. Ever!:J Al solution needs to be accounted for within the organization's risk management and governance.
"Like C!:Jbersecurit!:J, Al governance done well is a multidisciplinar!:J approach," Lee said. "ldentif!:J the stakeholders who could be impacted b!:J this - like emplo!:Jees, customers, vendors, downstream providers and third-part!:) IP holders." Then, include HR, legal, finance, IT, operations, compliance and other teams, to identif!:J the use cases, relevant metrics, risks and other issues.
"Harkening back to the can't-we-just-hire-a-vendor for that commentar!:J, !:JOU can't outsource that," Lee said. "As !:JOU adopt the proof of concept, it's important that ke!:J stakeholder perspectives are contemplated for proper risk management. And !:JOU need to have an internal champion."
Human oversight
Your organization needs input from across the range of stakeholders for an Al solution implementation - but it also needs human oversight for the solution's outputs.
"If !:JOU look at all of the anecdotes of Al technolog!:J gone wrong, it's the absence of a human overseer ever!:J time," Lee said. "You need to have a human overseer to make sure the solution is telling the truth, and confirm that it's providing utilit!:J as opposed to sending !:JOU down some expensive liabilit!:J nightmare."
Third-part!:) Al services come with the same warning as thirdpart!:J C!:Jbersecurit!:J services: You can outsource the solution, but !:JOU can't outsource the risks; those belong to !:JOUr organization, so the!:J must be contemplated and addressed b!:J those within !:JOUr organization.
The risks are still !:JOUrs.
"I think that's the most essential insight," Lee said. "You can have advisors help identif!:J a proof of concept, but if !:JOU don't dedicate someone to oversee the outputs - to be accountable for the qualit!:), consistenc!:J, and reliabilit!:J of same, it can lead to a ver!:J bad turn of events. Not onl!:J can !:JOU not give the S!:JStem itself a long leash, !:JOU should never let it completel!:J off the leash."
When !:JOU understand !:JOUr Al opportunities and risks, then !:JOU can form the risk management and governance framework !:JOU need. This framework will help bridge the risks between toda!:J's needs and tomorrow's Al opportunities.
Contacts
Frederick Kohm, Head of Services Industry & Principal, Risk Advisory Services
Grant Thornton Advisors LLC
T +1 215 376 6040
E frederick.kohm@us.gt.com
Johnny Lee, Principal, Risk Advisory Services
Grant Thornton Advisors LLC
T +1 404 704 0144
E j.lee@us.gt.com


Digital Twin Trust
1. V&V Project Overview
Digital twins are virtual representations of a physical asset, its environment, and its processes. They are increasingly employed by companies looking to decrease design, schedule, and cost risk by rapidly iterating through use cases of an asset in its virtual form. As with any engineering approximation, digital twins are not perfect substitutes for reality, and there is a constant balancing of risk when using outputs from a digital twin to make real-world decisions. To build trust, a validation and verification framework is proposed in [1] that can be applied to any digital twin. The goal of this document is to demonstrate the application of that framework to an existing digital twin used by Unmanned Systems (UxS), a business group within HII’s Mission Technologies division.

2. HII Unmanned Systems Digital Twin Overview
UxS designs, builds, and sustains unmanned underwater vehicles (UUVs). Due to the complexities of designing robots to withstand harsh ocean environments, a UUV digital twin (DT) has been developed to aid in the decision-making process. The DT is a physics-based representation of a UUV, its processes, and its environment [2]. The DT user can vary those representations to obtain useful output that would otherwise be prohibitively time consuming, expensive, or impossible to generate in the real world.
The UxS UUV DT is comprised of four major components: (1) the virtual world, (2) the vehicle model, (3) sensor models, and (4) the hydrodynamic physics solver. The virtual world is populated with bathymetry maps, ocean turbulence, water density profiles, sea state effects, and physical obstacles in an integrated framework built on Unreal Engine and supporting processing nodes. The vehicle model is represented by the hydrodynamic hull signature and actuators populated from a suite of high-fidelity computational fluid dynamics (CFD) simulations. This vehicle is outfitted with simulated sensor models developed as mathematical representations of the sensor functions, with specifications applied based on vendor datasheets and real-world data. The simulated vehicle interacts with the virtual world through its sensors and hydrodynamic properties. The high-fidelity physics solver uses these interactions to solve a set of non-linear equations integrated in time for the dynamic vehicle state. The DT is developed in a modular framework that allows models to be used when needed and combined to represent various vehicle configurations. Each model publishes its output information onto the data distribution service (DDS) bus for any other model to subscribe to. The following sections detail some of the significant DT components and processes through the ontology presented in [1].
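To make the pattern concrete, the following is a minimal, in-process sketch of the publish/subscribe flow described above. The class and topic names are illustrative assumptions; the actual DT runs on DDS middleware rather than this stand-in.

```python
# Minimal publish/subscribe sketch of the modular pattern described above.
# All names are illustrative; the real UxS DT uses DDS middleware, not this
# in-process stand-in.

from collections import defaultdict
from typing import Callable

class Bus:
    """In-process stand-in for a DDS data bus: topics map to subscribers."""
    def __init__(self):
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, sample: dict) -> None:
        for handler in self._subs[topic]:
            handler(sample)

class DepthSensorModel:
    """Publishes a depth reading derived from the vehicle state it receives."""
    def __init__(self, bus: Bus):
        self.bus = bus
        bus.subscribe("vehicle_state", self.on_state)

    def on_state(self, state: dict) -> None:
        self.bus.publish("depth", {"depth_m": -state["z_m"]})

bus = Bus()
DepthSensorModel(bus)
bus.subscribe("depth", lambda s: print(f"depth = {s['depth_m']:.1f} m"))
bus.publish("vehicle_state", {"z_m": -12.3})  # prints: depth = 12.3 m
```

Because every model only talks to the bus, models can be swapped in and out per vehicle configuration without changing one another's code, which is the design benefit the modular framework is after.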
2.1. Model Library
The model library is a collection of model elements that can be combined to represent the digital twin. For the UxS UUV DT, models are essentially the discretization of the real world into its virtual, mathematical counterparts. Each model has a distinct purpose in the virtual world and when combined, form a complete representation of the UUV. Models include the physics solver, propeller, depth sensor, GPS, and visualization system.
Figure 1: Simulated small- and medium-class UUVs operating in HII's UUV Digital Twin over local bathymetry
The fidelity of each model varies, both in software complexity and digital twin classification level. Many of the models are classified as digital twin prototypes because they do not exchange information with the corresponding real-world systems. Other models have modes that allow connections to real hardware and are an example of a complete digital twin. How the models are linked together and deployed influences the overall complexity of the DT.
2.2. Digital Twin Configuration
A digital twin configuration is a set of models that are linked in a certain way to run a simulation. Due to the modular nature of the DT, many configurations can be formed depending on the use case. The configuration referenced throughout this report is Software-in-the-Loop (SIL), which links the digital twin with a processor running vehicle control software through communication protocols like Ethernet and serial.

2.3. Use-Case for this Report
When performing validation and verification on a digital twin, the use case is an essential aspect to establish up front, as it has a large impact on whether the V&V activities provide acceptable results. Example use cases for the UUV DT include use as a testbed for developing autonomy algorithms, training autonomy algorithms, troubleshooting hardware, feasibility studies for new intended vehicle operations, or operator training. These would each pose very different V&V tests and success criteria.
This paper will discuss the V&V for the digital twin of one of HII’s small-class UUVs when used for design and testing of the vehicle control software. This DT application uses the SIL configuration introduced in Section 2.2. The V&V of individual models will be discussed in the context of the overall digital twin, as its modular nature lends itself to activities on a model-by-model basis.
3. Verification and Validation Processes
This section will cover the process implemented by the Modeling and Simulation team to complete validation and verification activities during the development and testing of the digital twin, following the framework laid out in [1]. The result of this process is a V&V report, one per individual model and one per DT configuration use case.
3.1. New Feature Identification & Use-Case
When a manufacturing contract is established, the modeling and simulation team works closely with the systems and software engineering teams to determine the digital twin requirements. New models or upgrades to existing models are identified to support the new vehicle variant. Requirements and development are tracked using Jama and Jira (workflow and requirements management) tools and linked to the final V&V report created by the team.
For the test case discussed in this report, the digital twin must mimic the dynamics, processes, and environment of a small-class UUV. If the digital twin data is proven to accurately represent the real vehicle’s motion through the environment and all feedback from sensors and processes, the software team (DT users) can use the digital twin to test vehicle control code in the virtual environment. Table 1 contains a single requirement as an example for this use-case. These requirements are the result of discussion between the DT Users and the DT developers.
Figure 2: Software-in-the-Loop Architecture
Table 1: UUV Digital Twin Requirement Example for Vehicle Control Software Testing Use-Case
Requirement: UUV DT shall capture similar steady-state flight to the actual vehicle.
Acceptance Criteria: Pitch response percent error as compared to sea data is lower than the maximum threshold percent error required per vehicle specifications.
Metrics: Pitch response from the simulated vs. actual vehicle log files.
The requirements for individual model V&V activities can differ in format from the use-case requirements. Many sensor requirements are driven by vendor specification sheets and communication protocols with vehicle control code, while actuator requirements are based on higher order modeling inputs. In the next phase of the report, the validation of an individual model will be discussed.
3.2. Criticality
Before beginning the validation and verification activities, a criticality assessment is completed for each model or digital twin configuration. This determines how rigorous the V&V process should be. The assessment looks at various criteria about the model’s importance in the overall digital twin configuration, such as the number of other models its data impacts, the effect that model failure has on the overall outcome, and whether similar data is available or has been used for decision making in the past.
The resulting consequence level is combined with the results integration level and supplementary information availability level to calculate a model influence level. This is required to determine the overall criticality of the model or digital twin configuration and thus determine the magnitude of V&V required.
3.3. Conceptual Model Validation
Conceptual model validation answers the question of whether the theories and assumptions used to create the model are correct and whether the model representation of the real-world component is reasonable for the use-case. The goal of this V&V activity is to document this information before and during model development in a standardized format, as both V&V report content and a resource for future work. The conceptual model validation also clearly bounds the problem space and use-case of each model, providing hard evidence when discussing applications of the overall digital twin. The UUV fin model is presented below as an example case for the conceptual model validation process.
3.3.1. Actuator Model
A UUV’s fins and propeller are the main actuators of the UUV. The fins are a specialized model in the UxS DT because the model content is largely driven by computational fluid dynamics (CFD) modeling completed based on the specific fin dimensions. This CFD output drives the lift and drag coefficients (Cl and Cd) based on angle of attack (AoA) that feed into the UxS DT fin model, along with lift and drag force equations. These model development inputs are completed once and inform the developers who are writing the model. A different set of inputs is required to run the model. These include fin position commands, vehicle velocities, rates, and wave elevation and density from the environmental state. The outputs from the model include lift force, drag force, and induced moments.
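As a rough illustration of this data flow, the sketch below interpolates placeholder lift and drag coefficient tables by angle of attack and applies the standard dynamic-pressure force equations. The coefficient values, reference area, and function names are invented for illustration and are not HII CFD output.

```python
# Hedged sketch of the fin-model flow described above: CFD-derived Cl/Cd
# tables indexed by angle of attack drive standard lift/drag equations.
# All numbers below are placeholders, not actual CFD results.

import numpy as np

AOA_DEG  = np.array([-15.0, -10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
CL_TABLE = np.array([-0.9, -0.7, -0.35, 0.0, 0.35, 0.7, 0.9])    # lift coeff.
CD_TABLE = np.array([0.12, 0.07, 0.03, 0.02, 0.03, 0.07, 0.12])  # drag coeff.

def fin_forces(aoa_deg: float, speed_ms: float,
               rho: float = 1025.0, area_m2: float = 0.01):
    """Return (lift N, drag N) for one fin via table lookup and q * S * C."""
    cl = np.interp(aoa_deg, AOA_DEG, CL_TABLE)
    cd = np.interp(aoa_deg, AOA_DEG, CD_TABLE)
    q = 0.5 * rho * speed_ms ** 2        # dynamic pressure, Pa
    return q * area_m2 * cl, q * area_m2 * cd

lift, drag = fin_forces(aoa_deg=5.0, speed_ms=2.0)
print(f"lift = {lift:.2f} N, drag = {drag:.2f} N")  # lift = 7.18 N, drag = 0.61 N
```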
3.4. Model Verification

The next process implemented as part of the UxS DT framework is model verification, which determines if the model implementation sufficiently satisfies the requirements of the use case. Model verification is also documented as part of the individual model V&V process for the UxS DT.
3.4.1. Software Quality Assurance
The UxS Modeling and Simulation team is the primary digital twin developer and is responsible for conducting software quality assurance. At the beginning of the model development process, a new ticket is created in Jira and assigned. This ticket serves as a compilation platform for any information, bugs, anomalies, and testing sequences during model development. The team also uses GitLab for configuration management of the digital twin source code. Individual branches are used by developers to make and test any new model before merging work into the main branch. A detailed code review is completed at the merge stage, and approval is needed from two team members before any change is accepted. Additionally, tagged versions of the digital twin code base are created and saved after major changes. These Jira tickets, GitLab merge requests, and any other development work are included in the model V&V document as part of model verification.
3.4.2. Numerical Algorithm Verification
The modeling and simulation team is also responsible for numerical algorithm verification, which determines the correctness of the algorithms implemented in the model code. The verification rigor in this category scales depending on the model complexity. Models like a GPS or a depth sensor implement straightforward formulas, so automated unit tests are sufficient for verification. Unit tests are implemented for all models as baseline testing that outputs simple binary pass/fail reports.
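The sketch below illustrates what such a baseline unit test might look like for a depth-sensor-style model implementing the hydrostatic relation depth = P / (ρg). The model, constants, and tolerances are assumptions for illustration, not the UxS test suite.

```python
# Illustrative unit test in the spirit described above: a simple-formula
# model is checked against known values with binary pass/fail assertions.
# The formula, constants, and tolerance are assumed for this example.

import math

RHO = 1025.0   # seawater density, kg/m^3 (assumed constant)
G = 9.81       # gravitational acceleration, m/s^2

def depth_from_pressure(gauge_pressure_pa: float) -> float:
    """Hydrostatic depth model: depth = P / (rho * g)."""
    return gauge_pressure_pa / (RHO * G)

def test_depth_sensor_formula():
    # Pressure generated at exactly 10 m should invert back to 10 m.
    assert math.isclose(depth_from_pressure(RHO * G * 10.0), 10.0,
                        rel_tol=1e-9)
    # Zero gauge pressure means the vehicle is at the surface.
    assert depth_from_pressure(0.0) == 0.0

test_depth_sensor_formula()
print("depth sensor unit tests passed")
```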
The UUV fin model outlined in Section 3.3.1 has a more complicated formulation, so the developer also conducts integration tests using the autopilot configuration to verify the model. An example mission uses a sandy, flat seafloor with no environmental effects and commands the vehicle to fly a box or lawnmower pattern. Visual inspection is used to verify that the vehicle dives, surfaces, turns left, and turns right when commanded to do so. Evidence from the unit tests and baseline autopilot missions is gathered and submitted as part of each model’s V&V document.
Figure 3: UUV Fin Inputs, Outputs, and Model Content
3.5. Results Validation Activities
The last process conducted as part of the UxS V&V activities is results validation, which determines if the digital twin is an accurate representation of the real-world problem. A key component of results validation is comparison with real-world data, so the predictive capability of the digital twin can be understood. This process is completed by the DT developer, but input may be needed from SMEs and the DT users when performing analysis.
At this point, the focus shifts from the individual model level to the overall digital twin configuration. The entire digital twin must be exercised and compared to real-world data to determine if it meets the use case criteria. For the example use case, the digital twin is used to test the small-class UUV control code in the virtual environment. Since ample sea data exists for the UUV, a software-in-the-loop mission can be completed that mimics conditions from the real world. By comparing the outputs from the two missions, the digital twin’s predictive capability is established.
The general SIL validation process can be broken down into three major steps. The first is establishing communications between the vehicle control software and the virtual sensors and actuators running as part of the digital twin configuration.

During this step, any errors in the sensor messaging format can be identified and fixed. Once the vehicle control code is working with the digital twin code, the mission starts. At this point, any major dynamics discrepancies are analyzed in real time. For example, an inability to dive or major pitch oscillations would merit a pause in the simulation so the developers can revisit the digital twin models to identify bugs. Once the mission is completed, detailed post-mission analysis (PMA) is conducted. The PMA determines the dynamic performance of the digital twin versus the real-world referent data and drives the conclusions about the digital twin’s predictive capability.
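A simplified sketch of the kind of comparison PMA performs is shown below: simulated and referent pitch logs are aligned on a common time base and a percent error is checked against a requirement threshold (compare Table 1). The log data and the threshold value are placeholders.

```python
# Hedged sketch of a PMA-style comparison: resample the DT pitch log onto
# the referent time base, compute an error metric, and compare it to a
# placeholder specification threshold.

import numpy as np

def pitch_percent_error(t_sim, pitch_sim, t_ref, pitch_ref) -> float:
    """Mean absolute pitch error as a percentage of peak referent pitch."""
    sim_on_ref = np.interp(t_ref, t_sim, pitch_sim)  # align time bases
    peak = np.max(np.abs(pitch_ref))
    return float(np.mean(np.abs(sim_on_ref - pitch_ref)) / peak * 100.0)

t = np.linspace(0.0, 60.0, 601)
pitch_ref = 5.0 * np.sin(0.1 * t) + 0.20   # placeholder sea-data pitch, deg
pitch_sim = 5.1 * np.sin(0.1 * t) + 0.25   # placeholder DT pitch, deg

MAX_ERROR_PCT = 10.0  # placeholder threshold from vehicle specifications
err = pitch_percent_error(t, pitch_sim, t, pitch_ref)
print(f"pitch error = {err:.2f}% -> "
      f"{'PASS' if err < MAX_ERROR_PCT else 'FAIL'}")
```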
4. V&V Recommendations
Lastly, the digital twin use-case requirements are revisited, and formal recommendations are documented. Based on the data generated by the validation and verification activities, an assessment is made about the use of the digital twin. Any requirements that are not achieved are discussed between the DT User/Sponsor and the DT Developer for the specific use-case to identify error sources and next steps. This can often lead to updates to individual models in the DT or a reduction of the use-case scope.
5. Conclusion
While no digital twin can ever be a perfect representation of the real world, the goal of this framework is to provide structured, reasonable guidelines for interpreting digital twin outputs and clearly understanding when they are applicable. One of the most important aspects of any engineering work is communicating the results with project sponsors and stakeholders. The individual model and integrated digital twin use case V&V reports that are generated by the modeling and simulation team will be indispensable documentation for ensuring confidence in the ability to use the DT as a tool for vehicle design, integration, and test.
Figure 4: Example PMA Data Comparison Between DT and Referent Vehicle

References
[2] R. Capozzi, A. Costa and I. Friedrichs, "Creating Virtual World Environments for Ocean Vehicles," in

Introduction
At Jefferson Solutions Group (Jefferson), we take pride in our ability to provide comprehensive, data-driven, and impactful assessments that support federal agencies in achieving their missions. This report, a detailed evaluation of healthcare services within the Bureau of Prisons (BOP), exemplifies the caliber of work we produce in collaboration with our strategic partners. The findings, analysis, and recommendations in this report are the result of a rigorous and methodologically sound assessment, demonstrating the expertise, insight, and dedication that Jefferson brings to every engagement.
The Power of Strategic Partnerships
This report would not have been possible without the collaboration between Jefferson and the National Academy of Public Administration. Our ability to leverage strategic partnerships allows us to combine subject matter expertise, research capabilities, and deep industry knowledge to deliver solutions that address complex challenges. By working together, we provide federal agencies with comprehensive insights, practical recommendations, and actionable strategies that drive meaningful change.
Demonstrating Excellence in Federal Consulting
Healthcare in the correctional system is a critical issue, affecting not only incarcerated individuals but also broader public health outcomes. With nearly 158,000 Adults in Custody (AICs) across the BOP, ensuring consistent, high-quality, and cost-effective care is essential. Our evaluation highlights both the strengths and challenges of the BOP’s healthcare system, providing targeted recommendations for improvement in areas such as staffing, resource allocation, specialty care, financial management, and technology integration.
A Commitment to Impact
Beyond the immediate findings of this assessment, this report underscores Jefferson’s broader commitment to driving impactful change within the federal government. We understand that agencies face evolving challenges that require forward-thinking solutions. Whether addressing healthcare in correctional facilities, improving operational efficiency, or enhancing program management, Jefferson remains dedicated to delivering results that matter. As you read this report, we encourage you to see it not just as an assessment of a single system, but as an example of the level of expertise and quality Jefferson brings to all of our engagements. Our work is defined by analytical rigor, strategic insight, and a commitment to excellence: values that guide us in supporting federal agencies in their most pressing initiatives.
For agencies and organizations seeking a partner that delivers high-impact, high-quality solutions, Jefferson is ready to help navigate complex challenges and achieve transformative results.
Executive Summary
Purpose
This report provides an independent assessment of the healthcare services provided by the Bureau of Prisons (BOP) to Adults in Custody (AICs). Conducted over one year by Jefferson and the National Academy of Public Administration, the study evaluates BOP healthcare practices against community standards. The assessment aims to identify strengths, challenges, and opportunities for improvement within the BOP healthcare system to enhance service delivery, ensure patient safety, and improve health outcomes.
The Bureau of Prisons Healthcare System
The BOP provides healthcare to approximately 158,000 AICs across the U.S., Hawaii, and Puerto Rico. The Health Services Division (HSD) manages medical, dental, social work, and mental health services for federal AICs across 121 facilities. HSD’s budget accounts for $1.46 billion annually, about one-sixth of the total BOP budget. The cost of AIC healthcare increased by roughly 30%, from $615 million in 2017 to $800 million in 2023, demonstrating the growing financial strain associated with delivering medical care in a correctional environment.
Several factors challenge the BOP healthcare system. Nearly one-third of AICs are over 46 years old, making them more vulnerable to chronic illnesses that require ongoing care. Additionally, about 72.8% serve sentences of five years or longer, leading to increased long-term healthcare demands. As 97% of AICs eventually return to society, the quality of healthcare they receive in custody directly impacts their reintegration and community health. Many AICs enter the system with significant health disadvantages due to socioeconomic factors and lack of prior access to healthcare, necessitating a robust continuum of care to improve long-term health outcomes and facilitate successful reentry. Addressing these issues requires a strategic approach to staffing, resource allocation, and process efficiency.
Study Overview
We assessed healthcare practices, focusing on medical and mental health processes, utilization review, and telemedicine. We evaluated operational effectiveness and service gaps.
Findings Overview
The study evaluated six key healthcare domains: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. The methodology included literature reviews, data analysis, comparative benchmarking, and nearly 400 interviews across 12 institutions, including 170 AIC interviews. Key limitations included the study’s focus on only 12 of 121 facilities, limited access to population prevalence data, challenges in securing interviews with regional leadership, and operational disruptions such as lockdowns, which constrained the assessment’s scope.
Key Findings and Recommendations
1. Staffing and Resource Allocation
BOP employees demonstrate strong commitment and adaptability in providing care, yet staffing levels and recruitment strategies do not align with increasing healthcare demands. Many employees take on multiple roles, straining their ability to deliver consistent care. The study identified that healthcare staffing shortages lead to delays in treatment, increased workloads for existing personnel, and inefficiencies in service delivery. A lack of specialized recruitment efforts further exacerbates these challenges.
The demand for healthcare services has increased due to the aging prison population and the high prevalence of chronic diseases among AICs. Despite this, BOP facilities struggle with recruitment and retention of medical professionals, particularly specialists such as psychiatrists and chronic disease specialists. Overworked medical staff often find themselves unable to meet the needs of patients in a timely manner. Implementing staffing-to-patient ratios aligned with care level requirements, deploying a specialized HR team for healthcare recruitment, and expanding the use of paraprofessionals to support clinical employees would help address these issues. Expanding the recruitment pipeline through partnerships with medical schools, telehealth networks, and temporary staffing agencies could also alleviate workforce shortages.
2. Specialty Populations and Behavioral Health
BOP staff work diligently to provide care despite resource constraints, but there is limited availability of specialty care, integrated healthcare services, and trauma-informed practices. The lack of standardized training further impacts the effectiveness and equity of care. Specialty populations, including those with chronic conditions, disabilities, and mental health disorders, require targeted interventions.
Behavioral health services are a critical component of correctional healthcare, as many AICs enter the system with significant mental health needs. Yet, facilities frequently lack adequate psychiatric care, therapeutic programs, and crisis intervention resources. Mental health services are often reactive rather than preventive, with AICs receiving care only in crisis situations rather than through continuous monitoring and therapy. The study recommends broadening the range of mental health professionals, expanding trauma-informed care practices, and providing targeted mental health training for providers and correctional officers. Additionally, establishing community outreach and peer support programs can support post-release healthcare continuity, reducing recidivism rates and improving long-term health outcomes for AICs.
3. Standardization and Integration of Care
The BOP has made strides in process efficiency through innovations like medical buses and multidisciplinary teams, yet inconsistencies in screening, triage, and emergency protocols contribute to variability in healthcare quality. Fragmented healthcare delivery affects safety and effectiveness, leading to disparities in treatment availability across facilities. Establishing system-wide protocols for treatment and diagnosis can mitigate inconsistencies and enhance the efficiency of healthcare operations.
Standardizing screening and triage processes, integrating medical, dental, vision, and mental health services, and ensuring consistency in healthcare equipment and procedures across facilities would improve patient safety and care equity. Additionally, the expansion of telemedicine services and the
adoption of electronic health records can enhance integration efforts by allowing providers to share patient data across institutions, reducing redundancies and ensuring continuity of care.
4. Financial Management and Cost Projection
BOP has maintained operational efficiency despite financial constraints through resourceful approaches such as national prime vendors for pharmaceuticals. However, inefficiencies persist in medical services contracting, bill adjudication, and financial reporting. The lack of integration between healthcare data and financial systems complicates cost tracking and budget management, limiting the ability to predict future costs accurately. The costs of healthcare within the correctional system are rising, and without improved financial oversight, these expenses will continue to escalate. Shifting toward value-based care contracts, standardizing utilization management data collection, and improving financial reporting processes would enhance resource management and decision-making. The adoption of financial oversight mechanisms can prevent budget shortfalls and optimize resource allocation. Additionally, reviewing high-cost treatments and negotiating bulk pricing agreements with pharmaceutical and medical suppliers could reduce spending while maintaining quality care.
5. Electronic Systems and Data Management
Internal dashboards have improved data tracking and decision-making, but the Bureau Electronic Medical Record (BEMR) lacks modern features and operates as a standalone system. The absence of a standardized electronic medical bed management system further limits effective resource allocation. The study recommends enhancing BEMR by integrating clinical support tools, improving interoperability with other internal systems, and implementing a medical bed management system to ensure efficient resource allocation. These improvements would streamline medical documentation, reduce administrative burdens, and enable data-driven decision-making across facilities. Improved data-sharing capabilities could help healthcare staff access patient histories more quickly, ensuring that AICs receive appropriate and timely care.
6. Medical Equipment Management
While institutions take proactive steps to enhance patient safety, inconsistent medical equipment management results in frequent breakdowns and inefficiencies. Poor inventory control and irregular maintenance schedules disrupt healthcare delivery, delaying critical treatments. Implementing a comprehensive medical equipment management plan, including rigorous inventory control and regular maintenance schedules, would improve service reliability and resource utilization, ensuring that AICs receive timely and high-quality care. Leveraging new technology such as asset tracking systems could further enhance inventory accuracy and prevent shortages or misallocation of resources.
Closing and Next Steps
The BOP healthcare system demonstrates resilience but faces significant challenges in staffing, resource allocation, financial management, and technology integration. Addressing these systemic gaps is essential to fostering a culture of care that meets AIC needs and improves public health outcomes. Ensuring that healthcare reforms are sustainable and aligned with best practices will require ongoing commitment, investment, and collaboration between BOP leadership and external stakeholders.
Converged Cyber AI
A Paradigm Shift in Cybersecurity


THE ADVANCEMENT OF GENERATIVE AI capabilities presents enormous potential for modernizing government operations, but it also introduces new security gaps. While automation allows developers to move from concept to minimum viable product faster than ever, adversaries are developing similar AI-enabled techniques to discover and exploit security vulnerabilities.
To maximize generative AI benefits while minimizing threats, cybersecurity solutions must leverage AI as a first thought, not an afterthought. An AI-centric approach will enhance agencies’ abilities to both identify novel attacks and prepare to defend against them. To that end, Leidos is developing innovative converged cyber AI solutions with broad applications across government, from digital modernization to enhancing cyber-physical systems.
Generative AI for government
“The phrase ‘transformational technology’ is often overused,” says Bobby Scharmann, principal investigator for converged cyber AI at Leidos. “But with generative AI, it’s really appropriately used — there’s broad applicability in virtually every sector of our culture, from music and education and movies to improving the way humans and machines interact with each other.”
In the government space, generative AI has the potential to help solve the constant need to do more without an increased budget. Automated processes give human experts more time for deeper analysis and innovation. While fears around being replaced by AI have contributed to an often binary view of AI — that a task is either manual or entirely co-opted by AI — it’s more of a spectrum, with augmented decision-making as a sweet spot.
“Every analyst needs ways to help them get through the noise to the important information specifically relevant to them, so that they can make decisions instead of spending 95% of their day sifting through data and searching for what’s relevant,” says Robert Allen, a research scientist and solutions architect at Leidos.
In the software development lifecycle, the augmentative capabilities of generative AI can help developers write new code faster than ever, greatly increasing speed to delivery. However, research has also found that code written with AI assistance tends to be buggier and less secure than code produced through traditional methods.
“It’s an important consideration that while you’re potentially putting out more systems faster, they are inherently less secure in some areas,” Scharmann says, “unless you have a security-first mindset throughout the development process.”
A cybersecurity paradigm shift
For Leidos, this mindset represents a paradigm shift. Since the cybersecurity field was established decades ago, solutions have largely been increasingly advanced versions of the same rule-based or signature-based methods. Though they’ve grown more complex and layered over time, there are still gaps that can be exploited, which then must be filled with new heuristic updates.
Generative AI, and its ability to evolve and adapt in ways rule-based methods can’t, offers game-changing possibilities, but government leaders are understandably proceeding with caution. Industry partnerships offer reassurance and guidance as agencies move into uncharted territory.

Rule-based firewalls and other protections are familiar and predictable, “but what we need to combat an evolving AI-driven attack is a defense that evolves over time with AI. However, you don’t actually have full control over what the AI-enabled defense is,” says Meghan Good, senior vice president for technology integration at Leidos. “And that lack of visibility and explainability feels risky. We’re helping customers work through that risk.”
Given the sensitivity of government data, systems and infrastructure, technology leaders must take care in adopting new technologies. Leidos offers the benefit of innovative proving grounds and testing capabilities to hone new solutions before deployment. The key to developing cutting-edge converged cyber AI solutions is establishing learning environments that focus on: 1) all layers of security — perimeter, network, endpoints — and 2) offensive and defensive perspectives, or purple-teaming.
“Those two perspectives working together is really what strengthens the capability,” Allen says. “You might think that you have a good defensive product, but if you’re not actually testing that with an evasion capability, then you can’t strengthen it. The whole point is to have a system that’s learning from its adversary and improving on both sides over time.”
This is where AI can help in proactively discovering evasion techniques and vulnerabilities. Then, leveraging the same techniques, you can reinforce a system’s defensive posture. Through adopting a first principles approach to AI, breaking it down to its most fundamental interactions and objectives, Leidos is developing dynamic solutions with broad applicability.
“If a given capability is developed for an enterprise IT system, that doesn’t mean that the same underlying principles and underlying models are irrelevant in an embedded low-SWaP environment,” Scharmann says. “The deployment environments are different, but the
underlying fundamentals and characteristics are often very much transferable.”
Getting into an attacker’s mindset is difficult, especially when they’re leveraging their own AI-driven tools and exploits at the same time. AI offers ways to enhance both offense and defense skills beyond human capabilities.
“Leveraging AI/ML techniques allows you to discover novel and innovative approaches that otherwise might be subject to the imagination of the person codifying your more heuristic-based rules,” Scharmann says. “It allows you to branch out beyond just their imagination.”
Novel solutions for sophisticated threats
Leidos teams are taking a variety of innovative approaches to converged cyber AI solutions with applications across enterprise IT, cyber-physical systems, Internet of Things devices and more.
Among their latest research and solutions:
Generating training data: High-quality data is essential to training AI models in cyber attack detection, but there isn’t always enough data available that is representative of a particular attack type. “Our capability allows us to generate synthetic data that is higher quality than existing state-of-the-art methods out there,” Scharmann says.
TabMT, recently highlighted in a paper published by the prestigious Conference on Neural Information Processing Systems (NeurIPS), can take a small sample of data and create unlimited additional tabular data. It can also anonymize and later scrub the input data sample while still adhering closely to it, which is particularly beneficial given the high level of sensitivity and confidentiality required for handling government data.
Understanding networks in context: Current enterprise network audits fall short when it comes to contextualizing risks. Leidos’ AI-driven network exploration aims to improve such practices by identifying attack paths to key cyber terrain and the exploits attackers may leverage to traverse them, creating invaluable situational awareness. Identifying the easiest exploitation paths to sensitive network segments or resources highlights the greatest risks to the network and essentially creates a map that can be used in the event of a compromise to identify where an attacker might traverse next to access valuable resources.
“It’s like going from a list of, ‘we have all of these assets in our environment’ to ‘we fully understand what the relationships are between them right now and the changes that are happening to them over time,’” Good says.
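One common way to frame the attack-path idea, sketched below purely for illustration (this is not Leidos’ implementation), is to model the network as a directed graph whose edge weights encode exploit difficulty and then compute the cheapest paths to key cyber terrain. The nodes, weights, and topology are all hypothetical; the sketch assumes the networkx package is installed.

```python
# Illustrative attack-path sketch: a directed graph with edge weights that
# encode exploit difficulty; the lowest-cost path to key cyber terrain
# highlights the greatest risk. All values are hypothetical.

import networkx as nx

G = nx.DiGraph()
# (source, target, difficulty): lower weight = easier exploit.
edges = [
    ("internet", "web_server", 2.0),    # exposed public service
    ("web_server", "app_server", 3.0),  # lateral movement
    ("internet", "vpn_gateway", 6.0),   # hardened entry point
    ("vpn_gateway", "app_server", 1.5),
    ("app_server", "db_server", 2.5),   # key cyber terrain
]
for u, v, w in edges:
    G.add_edge(u, v, weight=w)

path = nx.shortest_path(G, "internet", "db_server", weight="weight")
cost = nx.shortest_path_length(G, "internet", "db_server", weight="weight")
print(" -> ".join(path), f"(total difficulty {cost})")
# internet -> web_server -> app_server -> db_server (total difficulty 7.5)
```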
Enhancing existing rule-based systems: Leidos uses AI to discover rule-set gaps and automate defenses against potential attacks. From an offense perspective, it augments human capabilities with thousands of high-confidence, logic-preserving bypass approaches, while from a defensive perspective, it provides automated patching beyond signature-based methods.
“We look at our perimeters differently and take an adversarial, evasive approach through what we currently have deployed, whether that’s different layers of firewalls or zero trust network access layers,” Good says. “What malicious activity is still making it through from an evasion perspective? And then how do we proactively update our alerting capabilities to better look for these evasive attempts and make sure we’re detecting things before they occur in the wild?”
Testing AI solutions in relevant environments: To provide a safe way to do evasion and defensive testing using new products and capabilities, Leidos developed CastleClone, a cyber range solution that enables organizations to create digital twins of their environments and networks. The clones can be used for everything from penetration testing and malware analysis to training employees and new product assessments, all tailored to the organization’s unique needs.
CastleClone and digital twins can help ease fears or uncertainties around AI adoption because they offer government leaders a low-risk, custom environment for seeing firsthand how new tools and methods play out.
Building on top of leading commercial AI: To further enhance exploration and development, Leidos also

partners with leading commercial technology providers. Accelerating innovation doesn’t mean creating every piece from scratch but rather leveraging the best of existing solutions to create new capabilities. For example, Leidos has teamed up with code intelligence platform Sourcegraph to introduce secure, generative AI-enabled software development tools to government customers.
Good also highlights a partnership with Moveworks, a chatbot solution powered by GPT-class machine learning models with a conversational interface that works across a variety of systems.
“It makes it so you can get to information that was previously in silos,” Good says. “A capability like this is able to federate queries and search across and make it so that your data is really actionable.”
Keeping pace with evolving threats
As agency leaders continue to explore ways to incorporate generative AI into their cybersecurity solutions, Leidos offers expertise and guidance on how to approach it in a measured and strategic way. While it may be a transformational technology, that transformation doesn’t have to happen overnight.
“As part of our trusted AI approach, we start by analyzing and figuring out what we can do to better
assist someone, then how we’re going to augment them in the future, all before going into more autonomous operations,” Good says. “By going through those steps, by using digital twins and proxies, you can understand what the AI is doing and evaluate the risks before deploying it into production.”
Cybersecurity will always be a challenge in government, but a commitment to exploring novel technologies and solutions makes that challenge manageable. Staying ahead of adversaries is critical and increasingly complex. Generative AI solutions provide the opportunity to augment capabilities beyond what humans can do alone, enabling faster development and more robust security measures.
“We constantly need to be evolving, because our networks are evolving every day,” Good says. “We need a more dynamic way to look at our environments, and that’s the capability that we’re building.”
Learn more about how Leidos can help your agency successfully implement generative AI.

GOVCON BUYER BEWARE, YOU NEED AI “DOMAIN AWARE”

If you want to maximize the performance of your generative artificial intelligence (GenAI) platform for government contracting (GovCon) business development (BD), you need it to be “domain aware.” A domain aware GenAI possesses “the capability to generate content that is contextually relevant and accurate within a specific field or area of knowledge, often referred to as a ‘domain.’” This awareness transforms the GenAI from a generalist—albeit an amazingly powerful one!—into a generalist that also happens to be an expert in a particular discipline.
VALUE PROPOSITION FOR USING GENAI FOR BD
Having a domain-aware GenAI platform means you can count on the GenAI to respond to your requests in that specialty area with greater depth of knowledge, simpler prompts, and reduced potential for errors and incorrect assertions (known as “hallucinations”). The value proposition for using GenAI for BD focuses on two areas for improvement within your proposal processes:
(1) increasing bid quality by applying best practices and standards at every step in the process and (2) reducing time-consuming, labor-intensive tasks such as preparing content plans and drafting proposal content.
These improvements positively affect your company’s bottom line because they yield both efficiency—winning more through higher quality—and cost-effectiveness—bidding more by decreasing cost per bid. However, to realize these improvements, you need a GenAI system that “understands” the BD ecosystem and its terminology and best practices.
A FEW BASICS OF GENERATIVE AI SYSTEMS
To understand how GenAI systems can be made domain-aware for GovCon BD, you need to understand some fundamentals of the large language models (LLMs) that are the writing engine of GenAI. While this may be an oversimplification, we can say that an LLM runs calculations based on the usage and relationships between words in its training data. The GenAI performs these calculations on the user input (called a “prompt”) and predicts a word that makes sense as a continuation of the input. And the word after that. And the word after that…
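The toy sketch below makes that loop concrete with a bigram “model”: pick the statistically most likely next word, append it, and repeat. Real LLMs learn vastly richer word relationships than raw bigram counts, but the generate-one-word-then-repeat loop is the same in spirit. The corpus is invented for illustration.

```python
# Toy next-word prediction: a "model" reduced to bigram counts picks the
# most probable continuation one word at a time. Purely illustrative.

from collections import Counter, defaultdict

corpus = ("the proposal shall address the requirements and the proposal "
          "shall address the evaluation criteria").split()

bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, n_words: int = 3) -> str:
    words = prompt.split()
    for _ in range(n_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # most likely next word
    return " ".join(words)

print(generate("the proposal"))  # the proposal shall address the
```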
Unlock the full potential of GenAI for GovCon business development with a domain aware platform that’s not only powerful, but expertly tailored to your industry—giving you the strategic edge you need.
DOMAIN AWARENESS MATTERS
What happens when you ask GenAI a question or make a request that requires specialized knowledge or terminology specific to a BD topic, such as contract clauses, the Federal Acquisition Regulation (FAR), or a customer’s Statement of Work (SOW) performance requirements? A generalist GenAI will answer to the best of its ability, based on the words and word relationships in its training data set. However, if the LLM’s training data lacks domain-aware content, the resulting answer will usually lack nuance and precision.
TECHNIQUES FOR MAKING GENERATIVE SYSTEMS DOMAIN AWARE
To make GenAI domain-aware in a specialized field like GovCon BD, the GenAI architecture must allow for injection of specialized content from that domain into its system. This additional content supplements the GenAI’s existing training data set with terminology and word relationships specific to the domain of interest. There are several strategies for infusing this domain-specific content:
§ Prompt Engineering. GenAI systems allow users to add supplemental content by expanding the context information in their prompt or, in some cases, by attaching entire documents.
§ Retrieval Augmented Generation (RAG). Some GenAI systems allow users to set up a library of documents that the GenAI platform can search for content relevant to the user’s request. Typically, the library contains proprietary and
otherwise sensitive documents in a secured environment. The searches use a technique called “semantic search,” which searches for content with meaning relevant to the subject of the prompt. That content is retrieved, added to the user’s request, and then input into the LLM. This approach incurs significantly less cost and effort than fine-tuning, but it may not achieve the same level of performance. A similar approach connects the GenAI directly to the user’s file management system via an Application Programming Interface (API), instead of using a dedicated library. A minimal sketch of this retrieval step appears after this list.
§ Contextual Embedding. Some GenAI platforms contain domain-specific algorithms and information built into the system. These algorithms respond to user prompts by incorporating additional information that is added to the prompt and fed into the LLM behind the scenes. Such platforms augment your prompt with relevant information from sample documents, doctrines, and regulations that were not present in the LLM training data set.
§ LLM Fine-Tuning. Some GenAI architectures allow finetuning, where AI scientists and engineers modify the LLM training data by incorporating additional data. For example, an LLM can be fine-tuned with proprietary documentation from a company’s past performance library. The additional data alters the calculations being performed inside the LLM, ensuring that every response to a user prompt is influenced by the LLM’s conditioning on the additional training data. Although this approach can yield particularly good results, it takes time and effort from AI specialists to do the periodic retraining of the LLM.
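As promised above, here is a minimal sketch of the RAG retrieval step. Production platforms use neural embeddings for semantic search; TF-IDF cosine similarity stands in here so the example remains self-contained (it assumes scikit-learn is installed), and the library snippets are invented.

```python
# Minimal RAG retrieval sketch: score library chunks against the prompt,
# keep the top-k, and prepend them to the user's request. TF-IDF stands in
# for the neural embeddings a real platform would use.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

library = [
    "Transition plan staffing approach retained incumbent staff in 30 days.",
    "Quality control plan metrics for software deliverables.",
    "Phase-in schedule with knowledge transfer milestones and risk register.",
]
prompt = "Draft a transition plan covering phase-in schedule and staffing."

vec = TfidfVectorizer().fit(library + [prompt])
scores = cosine_similarity(vec.transform([prompt]),
                           vec.transform(library))[0]

top_k = 2  # keep only the highest-scoring chunks
best = sorted(range(len(library)), key=lambda i: scores[i], reverse=True)[:top_k]
augmented_prompt = "\n".join(library[i] for i in best) + "\n\n" + prompt
print(augmented_prompt)  # retrieved chunks prepended to the user's request
```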
Figure 1 shows the overall GenAI input process and where the information is injected to make a GenAI domain aware.

AN EXAMPLE: DRAFTING A TRANSITION PLAN USING DOMAIN-AWARE GENAI
To illustrate the power of a domain-aware GenAI, let’s explore how each technique is applied in sequence to assist a proposal writer who is drafting a transition plan (sometimes called a phase-in plan) for a federal contract proposal. Transition plans are common proposal deliverables.
BUILD AND ENTER THE PROMPT
As shown in Figure 1, the process begins with the writer preparing a prompt, either manually or by choosing one that is vendor-provided. Writers preparing their own prompt should use a best practice such as populating a template that identifies the user’s persona, the context of the request, the request itself, and an output format; a small template sketch follows this list.
§ A persona for this example might be for the user to be a proposal writer and the GenAI to be a proposal writer’s assistant.
§ The context includes specific and relevant information related to the request, such as a transition plan outline provided by the proposal manager, along with the applicable proposal preparation instructions (PPI), proposal evaluation criteria (PEC), and the requirements that the transition plan must address. The context can also include business intelligence on customer hot buttons and competitor capabilities.
§ The request is a statement that directs the GenAI to perform a task. An example request for drafting a transition plan might be, “Write narrative for a transition plan that follows the outline, addresses all of the requirements, and incorporates numerous features that will score as strengths so the transition plan will receive a high score in proposal evaluation.”
§ The output format guides the GenAI on style, tone, readability, and output structure. For a proposal deliverable such as a transition plan, an example might be, “The draft transition plan should be factual yet persuasive in tone and written at a 10th grade level of readability for an audience of government personnel experienced with proposal evaluation. The narrative should include and follow the outline and be no more than 1,000 words. When possible, the narrative should emphasize features of our offering that we expect to be different and superior to those of our competitors.”
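The sketch below shows one way such a persona/context/request/format template might be assembled programmatically. The field contents are invented placeholders; in practice they would come from the solicitation and the proposal manager.

```python
# Illustrative prompt template following the four-part best practice above.
# All field contents are placeholders, not real solicitation material.

PROMPT_TEMPLATE = """\
Persona: You are a proposal writer's assistant for a federal contractor.

Context:
{context}

Request: {request}

Output format: {output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    context=("Transition plan outline: staffing, knowledge transfer, risk.\n"
             "PPI: Section L instructions. PEC: Section M criteria."),
    request=("Write narrative for a transition plan that follows the outline "
             "and addresses all requirements."),
    output_format=("Factual yet persuasive tone, 10th grade readability, "
                   "no more than 1,000 words."),
)
print(prompt)  # ready to paste into the GenAI chatbot window
```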
Some GenAI platforms allow users to attach documents directly to the prompt. These documents could include the customer’s specific guidelines or regulatory documents needed for compliance. There is nothing inherently wrong with this approach, but for best performance it is vital that the information provided to the GenAI be specific and relevant to the request. Adding content that is not directly relevant to the request can degrade the response.
Some GenAI systems help users build personal and corporate profiles. Information from these profiles can be added to the prompt as well.
Once complete, the user enters the prompt into the GenAI chatbot (or equivalent) window. Some GenAI platforms are designed to minimize the time users spend writing and entering prompts. There is nothing inherently wrong with this, provided the user is assured that the vendor-provided prompt is consistent with the user’s company’s best practices.
PERFORM RETRIEVAL AUGMENTED GENERATION
For RAG to be effective in helping the writer draft a transition plan, a secure document library needs to contain useful information such as transition plans that have scored well previously, actual proposal debriefs that detail the customer’s evaluation of transition plan strengths and weaknesses, and corporate standards for transition plans. RAG offers a significant advantage over fine-tuning, as it is considerably less expensive to maintain the library than to continuously update the training of an LLM.
The RAG search returns “chunks” of text—sections of the searched documents—that are closest in meaning to the prompt. Such content might contain transition plan features that scored well in customer proposal evaluations, transition processes and features that are industry best practices, and ideas for solutions to specific solicitation requirements. These chunks are prioritized so only the highest scoring chunks are added to the prompt.



APPLY CONTEXTUAL EMBEDDING
Once the prompt is entered and supplemented with information from RAG and internet search, the domain-aware GenAI platform performs contextual embedding. Algorithms and information contained within the GenAI platform further supplement the prompt for a transition plan. An example of such information might be applying risk management methodologies from industry best practices to transition plan concepts and features present in the supplemented prompt, to add additional content on risks and mitigation.
LLM FINE-TUNING
Training the LLM on corporate documents such as high-scoring transition plans and industry best practices enables the GenAI to create nuanced, precise responses. This enables the GenAI to respond to the supplemented prompt with a detailed response for critical components of the transition plan, such as schedules, roles and responsibilities, key personnel, etc. The fine-tuning ensures that the generated content is not only relevant but also expertly crafted because the information is fully incorporated into the LLM, which is the GenAI writing engine.
CONCLUSION
Contemporary GenAI tools bring an amazing ability to synthesize information and write fluently on a range of subjects, but they are limited by the content of their training data sets. This limitation can be overcome by injecting additional information specific to a subject area, such as GovCon BD, making the system domain-aware. As you evaluate a GenAI platform, whether for BD or another specialized field like law, human capital, finance, or logistics, confirm that the system is indeed domain-aware and that you understand the associated rewards, costs, and risks.
GenAI platforms today use one or more of the techniques described above to enhance their ability to generate high-quality content in specific domains. Each method—prompt engineering, retrieval augmented generation, contextual embedding, and LLM fine-tuning—contributes to creating a tailored, precise, and comprehensive draft proposal response. This saves time and effort and also increases the likelihood of a successful bid by ensuring the proposal clearly communicates the bidder’s commitment to delivering exceptional value.
ABOUT LOHFELD CONSULTING
We are the premier capture and proposal services consulting firm focused exclusively on government markets—with a practical, cost-effective, and efficient approach to winning new business.
Since 2003, we have provided capture and proposal expertise and professional training services to more than 2,500 companies. Our work spans multiple industries, making Lohfeld Consulting a preferred partner for top contractors.

CONTACT US
Beth Wingate, CEO
703.638.2433 | BWingate@LohfeldConsulting.com
Bruce Feldman, Vice President, Artificial Intelligence
703.568.5462 | BFeldman@LohfeldConsulting.com
Bob Lohfeld, Chairman Emeritus
410.336.6264 | RLohfeld@LohfeldConsulting.com
LinkedIn: Lohfeld-Consulting-Group
YouTube: @Lohfeld_Consulting
www.LohfeldConsulting.com
Improving Mission Outcomes with a DataOps Approach to Document Processing
Despite digitization efforts, usage of paper-based documents continues to grow, and paper forms cost the federal government $3.78 billion annually, according to a 2022 U.S. Chamber of Commerce report.
As federal agencies work toward improving customer experience and business processes for those they serve, they need Intelligent Document Processing (IDP) to derive insights and analysis from digitized documents. With IDP, taxpayers can expect faster resolution of disputes, medical beneficiaries can enroll for benefits faster, and military commanders can access flight plans to make mission-critical decisions more quickly.
IDP involves a rich blend of cloud-based tools and business processes, including optical character recognition (OCR), artificial intelligence (AI) and machine learning (ML) algorithms, to automate the processing of complex documents in variable formats. Unlike traditional OCR solutions, IDP can not only recognize and extract text from documents, but it can also understand the context and meaning of the information with precise end-to-end workflow management.
AI is uniquely positioned to enhance these digitization efforts by enabling the processing of a wider range of documents, including electronic documents – such as emails, healthcare records, military guidance, tax documents, invoices, legal documents, and more – while improving quality and accuracy and ensuring higher throughput than was previously possible.
AI-powered IDP can extract data from documents 1,000 times faster than a human, cut data entry processing time to 75 seconds, and reduce costs by 80%, saving federal agencies hundreds of thousands of work hours and delivering direct cost savings to American taxpayers.
This AI-powered digitization provides a tremendous efficiency benefit, but at a cost: as the volume of digitized documents grows, managing the data becomes critical to ensuring continued, efficient access and to controlling storage and processing costs in the cloud.
A DataOps approach to IDP can also address potential challenges around accurately extracting data from documents and interpreting that data correctly. DataOps is an integrated, process-oriented approach to managing data workflows that mirrors agile software development practices (i.e., DevOps) and encompasses an organization’s technical and business process needs, accounting for workflows, technological tools, cultural norms, and varying levels of expertise.

Efficient data operations (DataOps), combined with AI, improve public services and customer engagement by ensuring rapid access to quality information. This allows federal agencies to process millions of documents a day and drive faster resolution times for medical benefits claims, customer service calls, tax returns, military deployment preparation, and more, bringing customer experience goals and data management requirements into alignment.
Elements of Intelligent Document Processing
Data Ingestion
IDP starts with scanning and digitizing documents via OCR tools, then transferring the content into a data management system. An advanced IDP workflow includes AI to ensure information is complete and accurate, even for documents with poor legibility, varied structures and formats, or handwritten content — all with as little human intervention as possible. Real-time data ingestion for analytical or transactional processing helps organizations make timely operational decisions while the data is still current. Transactional and operational data contain valuable insights that drive informed, appropriate actions.
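As a simple illustration, the sketch below OCRs a scanned page with the open-source pytesseract wrapper around the Tesseract engine and packages the text for downstream processing; the file name is a placeholder.

```python
# A minimal sketch of the ingestion step: OCR a scanned page and hand the
# text to a downstream pipeline. The file name is illustrative.
from PIL import Image
import pytesseract

def ingest(path: str) -> dict:
    """OCR one scanned page and return a record ready for classification."""
    text = pytesseract.image_to_string(Image.open(path))
    return {"source": path, "text": text}

record = ingest("scanned_form_page1.png")
print(record["text"][:200])  # preview the extracted text
```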
Document Classification
Through machine learning and natural language processing (NLP), an IDP workflow classifies scanned documents into useful categories (e.g., ID cards, tax documents, patient or health records, military flight plans, or active-duty reports) so they can be found and retrieved more easily. AI can also improve the accuracy of data classification, increase team efficiency, reduce false-positive alerts, and better prevent data loss.
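A minimal classification sketch follows, assuming a scikit-learn pipeline with TF-IDF features and logistic regression; the training examples and labels are illustrative.

```python
# A minimal sketch of document classification: TF-IDF features plus
# logistic regression. Training texts and labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Form 1040 adjusted gross income taxable year",
    "Patient name date of birth diagnosis code",
    "Aircraft tail number departure time route waypoints",
]
train_labels = ["tax_document", "health_record", "flight_plan"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Classify a new OCR'd document; expected label: 'flight_plan'.
print(clf.predict(["route waypoints and departure time for tail N12345"]))
```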
Data Extraction
Once the document has been digitized and classified, key information is extracted so the document can be indexed and made searchable, ensuring that only relevant documents are searched and retrieved. Data extraction also allows organizations to migrate data from outside sources into their own databases, improving data collection.
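The sketch below illustrates one simple form of extraction, using regular expressions to pull named fields from OCR text. The field patterns are illustrative; production pipelines typically combine rules with ML/NLP entity extraction.

```python
# A minimal sketch of rule-based field extraction from OCR text.
# Field patterns are illustrative assumptions.
import re

FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*#?\s*(\w+)", re.IGNORECASE),
    "date": re.compile(r"\b(\d{2}/\d{2}/\d{4})\b"),
    "amount": re.compile(r"\$\s*([\d,]+\.\d{2})"),
}

def extract_fields(text: str) -> dict:
    """Return the first match for each known field, or None if absent."""
    out = {}
    for name, pattern in FIELD_PATTERNS.items():
        m = pattern.search(text)
        out[name] = m.group(1) if m else None
    return out

print(extract_fields("Invoice #A1023 dated 04/01/2025, total due $1,250.00"))
# -> {'invoice_number': 'A1023', 'date': '04/01/2025', 'amount': '1,250.00'}
```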
Data Validation and Feedback
This is the stage in which the extracted data is validated against internal and external criteria to ensure accuracy, correct labeling, appropriate data cleansing, and more. Most IDP workflows require a human-in-the-loop at this stage to ensure extracted data is cleansed, labeled, and structured correctly. In advanced workflows, humans can train AI algorithms to enrich and validate data, thus minimizing the long, costly process of manual data validation.
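A minimal sketch of this triage follows, assuming each extracted record carries a model confidence score; the threshold and record shape are illustrative. In practice, corrections made in the review queue can be fed back to retrain the extraction model.

```python
# A minimal sketch of the validation step: route low-confidence
# extractions to a human review queue and auto-accept the rest.
REVIEW_THRESHOLD = 0.90  # illustrative cutoff

def triage(records):
    accepted, review_queue = [], []
    for rec in records:
        # rec["confidence"] is the extraction model's self-reported score.
        if rec["confidence"] >= REVIEW_THRESHOLD:
            accepted.append(rec)
        else:
            review_queue.append(rec)
    return accepted, review_queue

records = [
    {"field": "date", "value": "04/01/2025", "confidence": 0.98},
    {"field": "amount", "value": "1,250.00", "confidence": 0.71},
]
accepted, review_queue = triage(records)
print(len(accepted), "auto-accepted;", len(review_queue), "sent to human review")
```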
Data Integration
Validated data is then incorporated into business processes to meet mission needs. Advanced IDP workflows can also use AI to enrich validated data, augmenting it with additional details from internal or external sources that provide added context for decision-makers and improve its quality, accuracy, and value. This is critical for improving customer experience, omnichannel engagement, and mission outcomes. As organizations grow, they often find themselves working with different types of data in separate systems. Validated, integrated data allows for simplified sharing and more accurate, precise data-driven decision-making.
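As a simple illustration, the sketch below enriches a validated record with details from an internal reference table; the lookup data and field names are hypothetical.

```python
# A minimal sketch of the enrichment step: augment a validated record
# with details from an internal reference table. Data is hypothetical.
provider_directory = {
    "P-100": {"provider_name": "Springfield Clinic", "region": "Midwest"},
}

def enrich(record: dict) -> dict:
    """Merge any known reference details into the validated record."""
    extra = provider_directory.get(record.get("provider_id"), {})
    return {**record, **extra}

print(enrich({"claim_id": "C-1", "provider_id": "P-100", "amount": "1,250.00"}))
```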
How DataOps Improves Intelligent Document Processing
IDP workflows are complex and nuanced. When federal agencies need to auto-scale operations during surge periods (e.g., tax season, open enrollment, humanitarian crises, natural disasters, and conflict overseas), managing IDP pipelines efficiently and effectively poses technical and business process challenges.
Maximus offers critical, proven capabilities and expertise to manage IDP workflows through DataOps and AI:
• Data curation, preparation, and orchestration to ingest, extract, validate, integrate, and manage data efficiently and effectively.
• Data governance to ensure data quality and adherence to data standards and strategy.
• Continuous integration and continuous deployment (CI/CD) to streamline and optimize workflows.
• Data monitoring and observability to ensure security and adherence to data standards, regulatory compliance, and responsible use metrics.
• JAB-authorized Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and Infrastructure-as-a-Service (IaaS) to manage data and IDP workflows in secure, scalable environments.
• Robotic process automation (RPA) to automate manual processes for streamlined, optimized workflows.
• A ModelOps approach to AI and advanced analytics to ensure responsible use, consistent data and AI standards, shareability, scalability, regulatory compliance, and continuous improvement.
IDP workflows comprise a suite of tools and technologies, each with its own requirements and limitations, so it is important to incorporate a DataOps approach and domain-based methodologies and practices to ensure responsible use. Otherwise, the tools alone could produce unsatisfactory or inaccurate data insights. For federal agencies seeking quick, accurate document processing for meaningful data insights and analysis at the speed of mission need and relevancy, risking unsatisfactory or inaccurate results is not an option.
For example, if a traditional OCR tool scans a medical form left-to-right, it may not extract all the data in the document if it doesn’t understand how to pick up data in the different boxes and sections of the form. The human-in-the-loop will need to monitor and adjust the tool to ensure all data is extracted, and in the right order.
IDP requires a deep understanding of the mission, data workflows, the data collected, methodology, and data usage. Unless the human-in-the-loop understands how to compensate for OCR mistakes, or how to train the OCR tool with AI to improve scanning and data extraction, the data collected may be incomplete or the scan may be of poor quality.
Even when AI tools are used to improve data classification, extraction, and validation, data insights may be inaccurate or of poor quality without sufficient context. Understanding how the model was trained, what data it was trained on, and what its reported metrics mean is crucial to using AI tools effectively, which means federal agencies need teams with significant data and AI expertise. For example, AI algorithms may calculate a confidence score to gauge data validation after extracting data from scanned documents in an IDP pipeline. The average user might assume a 99% confidence score means the data is almost certainly accurate, but this is not always the case. Because confidence scores are based on the AI algorithm’s training data, extracted data with a 99% confidence score may still be inaccurate or interpreted incorrectly if the training data is poor.
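One way to catch this gap is a calibration check: compare reported confidence against actual accuracy on a labeled holdout set. The sketch below shows the idea, with illustrative data.

```python
# A minimal sketch of a calibration check: compare the model's confidence
# scores against actual accuracy on a labeled holdout set. Sample data
# is illustrative.
def calibration_report(preds):
    """preds: list of (confidence, was_correct) pairs from a labeled holdout."""
    high = [ok for conf, ok in preds if conf >= 0.99]
    if high:
        rate = sum(high) / len(high)
        print(f"Items scored >= 0.99 confidence: {len(high)}, "
              f"actually correct: {rate:.0%}")

# If training data was poor, 99%-confidence items may be right far less
# than 99% of the time -- exactly the gap described above.
calibration_report([(0.99, True), (0.995, False), (0.99, True), (0.80, True)])
```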
Why Workflow Management is Critical to Success
DataOps, in essence, is end-to-end workflow management for data pipelines, which is critical for federal agencies to derive meaningful, actionable data insights at the speed of mission relevancy. To employ a DataOps approach to IDP, federal agencies need industry partners who understand both operations and technology to scale operations rapidly while supporting mission needs.
AI for Federal Market Intel and Proposal Development
In today’s increasingly competitive federal marketplace, government contractors are under more pressure than ever to find, qualify, and win opportunities faster and more efficiently. The traditional lifecycle — from identifying a viable opportunity to delivering a compliant, compelling proposal — is often fragmented, labor-intensive, and riddled with handoff delays. But AI is changing that. Emerging technologies, particularly AI agents and proposal automation tools, are bridging the gap between business development (BD) and proposal teams. These systems create a seamless, intelligent workflow from discovery to delivery, transforming how contractors pursue federal work and dramatically improving the probability of win (pWin).
One of the most powerful advances in this space is AI for opportunity identification. Rather than relying on manual searches across disparate platforms like SAM.gov or agency forecast sites, AI systems can now scan, filter, and prioritize opportunities based on tailored criteria — such as NAICS codes, past performance alignment, socioeconomic set-asides, and even procurement trends specific to a target agency. Some platforms are even leveraging machine learning models trained on award history and incumbent data to surface “hidden gems” that might be overlooked through keyword-based searches alone. This kind of intelligent market intelligence not only accelerates pipeline development but also significantly improves targeting accuracy, giving BD teams more time to engage stakeholders and refine capture strategies.
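A minimal sketch of criteria-based filtering follows, with hypothetical opportunity records and criteria; a real pipeline would pull this data from sources such as SAM.gov and add scoring based on past performance and award history.

```python
# A minimal sketch of criteria-based opportunity filtering. Records,
# NAICS codes, and set-aside filters are hypothetical examples.
TARGET_NAICS = {"541512", "541511"}     # illustrative IT services codes
TARGET_SETASIDES = {"SDVOSB", "8(a)"}   # illustrative set-aside filters

opportunities = [
    {"id": "OPP-1", "naics": "541512", "set_aside": "SDVOSB", "agency": "DHS"},
    {"id": "OPP-2", "naics": "236220", "set_aside": None, "agency": "GSA"},
]

def qualifies(opp: dict) -> bool:
    """Keep only opportunities matching the firm's target criteria."""
    return opp["naics"] in TARGET_NAICS and opp["set_aside"] in TARGET_SETASIDES

shortlist = [o for o in opportunities if qualifies(o)]
print([o["id"] for o in shortlist])  # -> ['OPP-1']
```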
But the real breakthrough is what happens next. In many organizations, the transition from BD to proposal development is where momentum is lost — intel gets stuck in spreadsheets or buried in SharePoint folders. AI agents are closing that gap. Today’s AI-powered proposal systems can ingest opportunity data directly from your pipeline, parse solicitation documents, extract compliance requirements, and kick off content generation in near real time. Instead of restarting from zero, proposal managers receive a pre-structured outline, populated with draft content, tailored win themes, and compliance matrices — all informed by the same intel gathered during the BD phase. This workflow continuity not only eliminates redundant work but also ensures strategic alignment from initial pursuit to final submission.
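As a small illustration of requirement extraction, the sketch below pulls "shall" and "must" statements from solicitation text to seed a compliance matrix; the excerpt is invented, and real tools pair this with section mapping and NLP.

```python
# A minimal sketch of compliance-requirement extraction: pull binding
# statements from solicitation text. The excerpt is illustrative.
import re

solicitation = """
The contractor shall provide monthly status reports.
Proposals must not exceed 25 pages.
The contractor shall maintain a facility clearance.
"""

# Capture whole sentences containing binding language ("shall" / "must").
requirements = re.findall(r"[^.\n]*\b(?:shall|must)\b[^.\n]*\.", solicitation)

for i, req in enumerate(requirements, 1):
    print(f"REQ-{i:03d}: {req.strip()}")
```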
The benefits of AI proposal tools extend far beyond speed. These platforms are designed to enforce compliance rigor while reducing human error, which is critical in high-stakes bids where a missed instruction can disqualify an otherwise strong proposal. AI-driven systems can instantly flag missing responses, enforce section word counts, auto-fill past performance data, and even translate complex technical language into persuasive narrative. Many tools also include built-in review workflows and version control, helping teams stay organized and audit-ready throughout the proposal process. The result is higher-quality outputs, shorter turnaround times, and fewer late-night scrambles before deadlines.
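For example, one such automated check might enforce per-section word limits before submission, as in this minimal sketch with illustrative section names and limits.

```python
# A minimal sketch of one automated compliance check: flag proposal
# sections over their word limits. Names and limits are illustrative.
SECTION_LIMITS = {"Technical Approach": 2000, "Management Plan": 1000}

def over_limit(sections: dict):
    """Return (section, word_count, limit) for any section over its limit."""
    return [(name, len(text.split()), SECTION_LIMITS[name])
            for name, text in sections.items()
            if name in SECTION_LIMITS and len(text.split()) > SECTION_LIMITS[name]]

draft = {"Technical Approach": "word " * 2500, "Management Plan": "word " * 900}
for name, count, limit in over_limit(draft):
    print(f"{name}: {count} words (limit {limit})")
```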
At a strategic level, government contracting lifecycle automation offers a clear competitive advantage. Organizations that leverage AI not only respond faster — they respond smarter. By automating routine, time-consuming tasks, contractors can reallocate talent toward higher-value activities like pricing strategy, solution development, and customer engagement. This shift is particularly important in today’s environment, where shrinking proposal timelines and increased procurement complexity demand agility and precision. Moreover, the insights generated by AI — such as win-loss analysis, evaluator preferences, and keyword frequency — can feed back into future pursuits, continually sharpening a firm’s edge.
As we head into a new era of GovCon, the integration of AI across the opportunity-to-award continuum is no longer optional — it’s a strategic imperative. From generative AI tools that help write compelling responses, to autonomous agents that handle document analysis and compliance validation, contractors now have access to a 24/7 virtual proposal team. The next frontier will be systems that not only support human teams but act as decision-making partners, helping identify teaming partners, assess pricing competitiveness, and even predict protest risk based on past data.
Ultimately, the rise of AI in government contracting isn’t about replacing people — it’s about empowering them. By bridging the gap between market intelligence and proposal execution, AI helps BD and proposal professionals focus on what they do best: crafting solutions, building relationships, and winning business. Contractors who embrace this shift are already seeing measurable improvements in their win rates, proposal quality, and operational efficiency. Those who wait risk being left behind in an industry that’s rapidly redefining what “competitive” looks like.
More than 4,000 project-driven organizations depend on Unanet to turn their information into actionable insights, drive better decision-making, and accelerate business growth. To learn more about Unanet’s ERP and CRM solutions, visit unanet.com.
First Principle Thinking in Federal Contracting

US Federal Contractor Registration
A Restructuring Approach to Subcontractor Management
First Principle Thinking and Subcontractors
It would be an understatement to say that Trump’s second administration is moving at a different pace than the previous one. There are many viewpoints on how these changes are being communicated and executed. We can focus on the basic tenets being applied and consider how to use this time of uncertainty to improve our organizations.
One way to do this is by applying first principle thinking, a problem-solving method embraced by both the Trump administration and Elon Musk, who leads the Department of Government Efficiency (DOGE). This approach offers insight into shifts in federal contracting while providing a framework for guiding teams through the next four years.
This is a politically neutral thought exercise with the goal of applying a well-studied way of thinking to challenge our current methods.

“In every systematic inquiry where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these.”
– Aristotle
First Principle Thinking and DOGE
As we apply first principle thinking in our organizations, we look back to Aristotle, who described a first principle as “the first basis from which a thing is known.” This method breaks a process down to its simplest form, questions every assumption, and rebuilds from the ground up. Innovators like Elon Musk have widely adopted this approach, applying it to Tesla, SpaceX, and, more recently, DOGE.
Elon’s approach to first principle thinking has been documented in his biography and interviews, outlining the steps he follows to rethink complex challenges.
Step One: Question Every Requirement
“Each should come with the name of the person who made it. You should never accept that a requirement came from a department, such as ‘the legal department’ or ‘the safety department.’ You need to know the name of the real person who made that requirement. Then you should question it, no matter how smart that person is.”
Step Two: Delete
“Delete any part or process you can. You may have to add them back later. In fact, if you do not end up adding back at least 10% of them, then you didn’t delete enough.”
Step Three: Simplify and Optimize
“This should come after step two. A common mistake is to simplify and optimize a part or a process that should not exist.”
Step Four: Accelerate Cycle Time
“Every process can be speeded up. But only do this after you have followed the first three steps. In the Tesla factory, I mistakenly spent a lot of time accelerating processes that I later realized should have been deleted.”
Step Five: Automate
“That comes last. The big mistake in Nevada and at Fremont was that I began by trying to automate every step. We should have waited until all the requirements had been questioned, parts and processes deleted, and the bugs were shaken out.”
SpaceX is one of the largest federal contractors, and first principle thinking has helped it cut costs by enabling reusable rocket launches while making the company more competitive on bids. This approach is a win-win for both taxpayers and the company. Now, let’s apply the same method to a large prime contractor rethinking a key aspect of its business: its use of subcontractors to execute a contract.
Step One: Question Every Subcontractor Relationship
Treat this as a positive exercise: losing a contract puts more subcontractor agreements at risk than the exercise itself ever could. There are many ways to look at your subcontractors and start questioning their involvement:
■ Where is your contractor list kept, and who keeps it organized?
■ Are the capabilities of each subcontractor documented?
■ Who onboarded the contractor and for what purpose?
■ Is the contractor vetted by the SAM registration process?
■ Is the subcontractor plan referenced?
■ Are the bid responses reflecting the small business and set-aside qualifications of your subcontractors?
It is no secret that subcontractors are carried forward year after year by a manager or previous contractor who is no longer accountable to the organization. In many cases, no one has questioned their continued involvement, resulting in blind assumptions about their necessity. This presents a significant opportunity for all prime contractors utilizing subcontractors.
Step Two: Right-Size or Top-Grade Your Subcontractors
Once you have evaluated your subcontractors against the requirements of current or future contracts, it’s time to remove unnecessary relationships or upgrade to more relevant subcontractors—ones that a prime contractor employee has vetted, onboarded, and actively manages. There is too much at stake to keep subcontractors on the team simply because the relationship already exists. You may end up cutting more than your team is comfortable with, but remember Elon’s 10% add-back advice.
Step Three: Simplify and Optimize Your Subcontractor Data
Once you have removed excess subcontractor relationships, shift your focus to the quality and effectiveness of the remaining partnerships. Ensure you have a streamlined system for tracking key data points for each subcontractor, including:
■ Regulatory compliance and verification
■ Accurate and timely payments
■ Capabilities and past performance
■ Labor rates and cost analysis
■ Key performance indicators (KPIs) tracking
■ SKUs and inventory management
■ Subcontractor compliance and oversight
Step Four: Push Your Subcontractors
Now that your team is in place, it’s time to accelerate the job that needs to be accomplished. Exceeding timeline expectations while maintaining quality will set your organization apart in an industry often riddled with delays and cost overruns. If you’ve onboarded the right team, instilling confidence in their ability to meet tight deadlines will pay dividends when the project is complete.
Step Five: Automate Your Subcontractor Intelligence
Once you have thoroughly reviewed and right-sized your team, collected key data points, and pushed them to accomplish the mission, the next step is automation. It’s time to streamline how subcontractors are organized and how key data points are communicated up the leadership chain of the prime contractor. Leadership should never be in the dark about the status and effectiveness of the subcontractor team executing on the frontlines.
At USFCR, we help prime contractors organize and onboard subcontractors to accomplish their mission and stand out from the competition.


ERIC KNELLINGER President/CEO
Eric Knellinger is the President of US Federal Contractor Registration, the world’s largest and most respected Federal Government registration firm. Eric has over 30 years of experience in government acquisition, advertising, marketing, sales, and business development.
JESSICA SUMMERS Chief Operating Officer

Jessica Summers is a seasoned professional in government contracting with 14 years of experience. She is a trusted advisor for navigating federal procurement and specializes in registration, certification, bid proposal development, contract negotiation, and compliance management. She works with businesses of all sizes across industries to provide tailored guidance throughout the contracting process.
Email: jsummers@usfcr.com | Phone: (877) 252-2700 x730

CHRISTIE JACKSON VP of Regulation and Compliance
Christie Jackson is a highly experienced and successful professional with over 13 years of experience in the Federal Government Contracting & Procurement field. She is highly skilled in managing complex projects and initiatives across all phases of the contracting process.
Email: cjackson@usfcr.com | Phone: (877) 252-2700 x1758
Sources
Isaacson, W. (2023). Elon Musk. Simon & Schuster.

