Join delegates from Shell, Vår Energi, OMV, Equinor and more at Kongsberg
Digital’s annual tech event. Explore the trends that are shaping the future of work in oil and gas, chemicals, offshore wind and the grid.
TOPICS
Agentic AI
Assets of the future
Smart manufacturing
Future-proof operations
Net-zero and CCUS
Workflow automation
Operational agility
Digital twin technology
Follow the QR link to learn more and register for free:
mark.venables@cavendishgroup.co.uk
The intelligence beneath the surface
Digital twins in oil and gas are no longer an aspiration. They are an imperative. For an industry that has always grappled with volatility, complexity and razor-thin margins, the ability to model assets and operations with living, learning digital counterparts is beginning to redraw the boundaries of what is possible.
From exploration through to decommissioning, the value of an accurate, high-fidelity digital twin is now beyond dispute. Operators are leveraging these models not just to mirror physical infrastructure but to interrogate it, stress-test it, and improve it in real time. The result is a more responsive, resilient and intelligent approach to decision-making, one that replaces static reports and gut instinct with predictive insight and dynamic simulation.
The pace of progress is undeniable. Mature fields once reliant on legacy data and ageing systems are being reawakened by digital twins trained on decades of operational data. Remote sites are gaining new visibility. Maintenance schedules are moving from reactive to predictive. And design cycles, once measured in years, are being compressed through integrated twin environments that bring engineers, planners and AI systems into a single collaborative space.
But as with any significant shift, the transition is not uniform. Some operators are pushing ahead with enterprise-wide twin strategies. Others remain cautious, constrained by siloed data, fragmented technology stacks and cultural inertia. The real challenge now is not proving the value of digital twins; that case has been made. It is scaling them, securing them, and integrating them across the lifecycle in a way that is coherent, cost-effective and measurable.
That is why this publication exists. Future Digital Twin & AI is not here to repeat hype or echo well-rehearsed case studies. It is here to ask difficult questions, track the hard-won lessons, and spotlight those who are turning digital theory into operational reality. This sector has always evolved through cycles of disruption and adaptation. We believe digital twins
represent the next defining chapter in that evolution, a convergence of domain expertise, data intelligence and virtual simulation that will shape how oil and gas is produced for decades to come.
This issue brings together perspectives from operators, technology leaders and researchers at the sharp end of the transformation. We examine what it takes to build a scalable twin, how generative AI is enhancing simulation fidelity, and where the greatest return on investment is being realised. If we are to navigate a future defined by uncertainty, the ability to see clearly – through a trusted, digital lens – has never been more important.
Mark Venables Editor
Future Digital Twin & AI
At Wood, we are designing the future by decoding digital to transform industry.
Our Digital Asset and DataOps solution applies decades of operating experience and digital expertise to deliver the building blocks of digital transformation.
As your Main Digital Contractor, we drive digitalization across capital projects by setting the lifecycle strategy and building the digital asset. We then embed technology in operations to advance ways of working, enable operational excellence and deliver lasting value.
See what’s possible: woodplc.com/digital
The oil and gas industry must operationalise its data to stay competitive in a low-carbon future. The integrated digital twin offers the clearest path forward for emissions reduction, efficiency, and agility across asset lifecycles.
AI-powered digital twins are reshaping how assets are designed, operated, and optimised. They fuse real-time intelligence with predictive control to reduce risk and raise performance. The result is not just greater efficiency, but an entirely new model of asset management built on simulation, insight, and adaptation.
Physics-informed neural operators bring speed and intelligence to subsurface modelling. AI transforms how companies simulate complex underground systems, enabling faster decision-making for oil production and carbon storage.
Data handover failures in oil and gas projects are still far too common, often undermining the performance of even the most sophisticated digital assets. By appointing a Main Digital Contractor, operators can embed data integrity from the outset and ensure the transition from project to operations is seamless, efficient and value-driven.
Digital Trust: Qualification and assurance of digital twins and AI technologies
Why the energy industry needs to verify digital twins and AI technologies to remain trustworthy over time
Reshaping the future of work in energy
Real-world applications and use cases have emerged beyond the buzz and hype of AI. Natural language processing (NLP), large language models (LLMs), hybrid machine learning (ML), and generative and agentic AI all represent a litany of transformational technologies.
Integrating AI and accelerated computing is redefining simulation in oil and gas from complex subsurface modelling to real-time methane leak detection. Engineers are no longer constrained by hardware limits or slow workflows; cloud-native tools and GPU-powered platforms enable a new era of productivity and precision.
Teaching machines to think like Shell engineers
Domain adaptation for large language models is becoming critical in oil and gas as companies look to unlock decades of technical knowledge. By tuning AI models to their unique data and language, engineers can move beyond search to contextual reasoning and decision-making.
Final Word: Digital reservoirs and intelligent rigs
Mark Venables, editor of Future Digital Twin, explains how AI’s growing grip on oil and gas operations is not just optimisation – it is reinvention
West Africa becomes a new anchor for global LNG supply
bp has exported its first cargo of liquefied natural gas from the Greater Tortue Ahmeyim (GTA) project, a vast offshore development straddling the maritime boundary between Mauritania and Senegal. This shipment, loaded from the project’s floating LNG facility 10 kilometres off the West African coast, represents a strategic shift in global energy supply geography, as emerging producers step forward to meet rising demand for flexible and lower-carbon fuels.
The milestone follows the successful production of first gas earlier this year and marks the third upstream start-up for bp in 2024. It is the first of ten major oil and gas developments the company aims to bring online by the end of 2027, part of a broader strategy to grow upstream capacity even as global energy systems begin to decarbonise.
The GTA project is one of the deepest offshore gas developments in Africa, with reservoirs lying at depths of up to 2,850 metres. Once fully operational, Phase 1 of the project is expected to supply 2.4 million tonnes of LNG per year to international markets. While some of the output will eventually be allocated for domestic use in Mauritania and Senegal, the immediate focus is to establish the region as a reliable contributor to global energy security.
A new supply route in a shifting market
The timing of the first cargo from GTA is not incidental. With traditional supply routes under strain and geopolitical volatility affecting flows
from key producers, the emergence of new LNG hubs is critical. West Africa, with its strategic location and untapped reserves, offers a route that is less exposed to chokepoints and political instability.
Gordon Birrell, bp’s executive vice president for production and operations, called the milestone “a significant new supply for global energy markets.” He added, “Starting exports from GTA Phase 1 is an important step for bp and our oil and gas business as we celebrate the creation of a new production hub within our global portfolio.”
The project’s infrastructure reflects a deliberate strategy to reduce the complexity of shore-based facilities. Natural gas is processed on a floating production storage and offloading (FPSO) vessel around 40 kilometres offshore, where impurities and water are removed. It is then transferred to a separate floating LNG platform, cryogenically cooled, stored and ultimately exported. This offshore model reduces environmental and logistical pressures on coastal communities while increasing resilience and scalability.
Building capacity as well as infrastructure
While the export of gas may dominate headlines, the longer-term story lies in the industrial ecosystem taking shape around the project. Since entering Mauritania and Senegal in 2017, bp has engaged nearly 300 local companies and generated over 3,000 jobs. Its apprentice training programme is preparing a cohort of 47 technicians to operate the complex offshore infrastructure, with a broader social investment strategy spanning education, health, micro-finance and women’s cooperatives.
Dave Campbell, bp’s senior vice president for Mauritania and Senegal, described the moment as a “very proud day” for both countries. “Throughout the development of this project, we have built strong relationships with the project’s host governments, local communities and our partners, and we look forward to strengthening these in years to come as we continue ongoing operations,” he said.
Although artificial intelligence and digital transformation are not explicitly positioned at the centre of GTA’s operations, the complexity of the project implies a high degree of digital orchestration. Coordinating deepwater extraction, offshore processing, cryogenic liquefaction and international shipment requires advanced monitoring, simulation and control systems—foundations increasingly supported by machine learning and automated diagnostics in similar offshore ventures.
As global LNG demand grows amid the push for lower-emission fuels and greater energy security, the successful launch of GTA marks a notable diversification of supply. It places Mauritania and Senegal firmly on the map as emerging energy producers and underscores the growing role of non-traditional regions in shaping the future of global gas markets. For bp, it signals not just a geographic expansion, but a long-term bet on infrastructure designed to meet demand with scale, resilience and flexibility.
AI takes control of the oilfield as fracturing goes fully autonomous
In a move that signals the accelerating adoption of artificial intelligence in the energy sector, Halliburton and Coterra Energy have jointly launched what they describe as North America’s first fully autonomous hydraulic fracturing system. The technology, dubbed Octiv Auto Frac, eliminates the need for human intervention during stage delivery in the fracturing process, automating design execution with the push of a button.
Developed as part of Halliburton’s ZEUS intelligent fracturing platform, the Octiv Auto Frac service introduces AI-driven automation to a traditionally manual and reactive domain. The implications for efficiency, safety and consistency in shale operations are significant, particularly in complex basins such as the Permian, where minor errors in execution can translate into millions in lost output.
Before the introduction of this technology, engineers were required to make real-time decisions throughout each stage of a hydraulic fracturing operation. With Octiv Auto Frac, those decisions, ranging from pressure adjustments to chemical sequencing, are now made automatically, based on pre-programmed models and continuous data inputs. The system not only maintains tight control over design parameters but learns from prior stages to refine performance on the fly.
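Halliburton has not published the control logic behind Octiv Auto Frac, but the closed-loop pattern described above can be sketched in general terms: follow a pre-programmed design, correct deviations from live readings, and carry learned corrections into the next stage. The Python below is a deliberately simplified, hypothetical illustration of that pattern; every name, gain and threshold is an assumption, not the vendor’s implementation.

```python
# Hypothetical sketch of a closed-loop fracturing stage controller.
# Names, setpoints and gains are illustrative; they do not describe
# Halliburton's Octiv Auto Frac implementation.
from dataclasses import dataclass

@dataclass
class StageDesign:
    target_rate_bpm: float      # designed slurry rate, barrels per minute
    max_pressure_psi: float     # treating-pressure ceiling for the stage

@dataclass
class StageController:
    design: StageDesign
    gain: float = 0.02          # proportional gain on pressure headroom
    learned_offset: float = 0.0 # correction carried over from prior stages

    def command_rate(self, observed_pressure_psi: float) -> float:
        """Return the pump-rate command for the current control cycle."""
        headroom = self.design.max_pressure_psi - observed_pressure_psi
        # Back off the rate as pressure approaches the ceiling,
        # push toward the design rate when headroom is available.
        adjustment = self.gain * headroom + self.learned_offset
        return max(0.0, min(self.design.target_rate_bpm + adjustment,
                            self.design.target_rate_bpm * 1.1))

    def finish_stage(self, avg_rate_achieved: float) -> None:
        """Carry a small learned correction into the next stage."""
        shortfall = self.design.target_rate_bpm - avg_rate_achieved
        self.learned_offset += 0.5 * shortfall

controller = StageController(StageDesign(target_rate_bpm=90, max_pressure_psi=9500))
print(controller.command_rate(observed_pressure_psi=9100))  # rate command for this cycle
controller.finish_stage(avg_rate_achieved=86.0)             # learn from the stage just pumped
```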
The rise of intelligent completions
Coterra is the first operator to fully integrate and deploy the technology across all its completion programmes managed by Halliburton. Initial rollouts in the Permian Basin yielded a 17 percent increase in stage efficiency, leading to broader adoption across Coterra’s operations.
“The deployment of intelligent automation for hydraulic fracturing helps us execute stages consistently and provides us with more autonomy and control over the completion process,” said Tom Jorden, chief executive of Coterra.
The pairing of real-time automation with electric pumping and fracture monitoring under the ZEUS platform offers operators a new level of visibility and command over subsurface conditions. This integration also paves the way for closed-loop optimisation—where feedback from downhole sensors and surface systems informs continuous adjustments without requiring on-the-fly human input.
Halliburton’s vice president of Production Enhancement, Shawn Stasiuk, framed the development as a generational shift. “Octiv Auto Frac changes the game of completion performance. The service ensures that automation delivers consistent fracture execution every stage while giving our customers the control they demand over their assets,” he said.
Digital transformation reaches the wellhead
The oil and gas sector has historically lagged behind others in digital maturity, but technologies like Octiv Auto Frac suggest a growing willingness to embed automation and AI at the operational core. With mounting pressure to improve efficiency, reduce emissions and enhance workforce safety, autonomous systems offer a credible path forward.
Hydraulic fracturing, in particular, has been a focal point of digital experimentation, given its repetitive structure, complex logistics, and data-rich environment. With Octiv Auto Frac, Halliburton has shifted from providing digital insights to enabling direct digital execution—effectively placing machine intelligence in the driver’s seat of field operations.
The wider implications extend beyond single-well performance. Consistent execution across assets improves field-wide recovery rates and simplifies post-operation analysis, while reducing variability in environmental impact. More broadly, such autonomy could allow a leaner, more agile approach to development planning and execution, especially in tight labour markets and remote regions.
As the energy industry wrestles with its own version of digital transformation, projects like this suggest that the change will not arrive incrementally. Instead, it will come in leaps—marked by step changes in how decisions are made, operations are conducted and value is created. With the Octiv Auto Frac system now in active use, the age of AI-led oilfield operations is no longer theoretical. It is already underway, one autonomous stage at a time.
Digital asset management moves beyond prediction to execution with hybrid AI integration
Baker Hughes has launched the latest iteration of its Cordant Asset Performance Management (APM) platform, signalling a step change in how energy and industrial companies manage operational risk, maintenance costs, and sustainability performance. The update, unveiled in Florence, Italy, advances the digital transformation of asset-heavy industries with deeper integration of hybrid artificial intelligence, physics-based modelling, and decision automation.
Since its launch in 2023, Cordant APM has gained traction across national oil companies, LNG producers, petrochemical giants and fertiliser manufacturers. The past year has seen a five-fold increase in deployments, reflecting a growing appetite for systems that can bridge the gap between asset health diagnostics and real-time operational decision-making.
The latest upgrade introduces enhanced capabilities in risk management, resource productivity and sustainability. It promises not just to inform decisions but to act as an embedded layer of intelligence within plant operations—surfacing recommendations, prioritising interventions, and triggering maintenance activities without human intervention.
AI meets operations at scale
What sets this version of Cordant APM apart is its blend of data science and domain expertise. Hybrid AI models now work in concert with engineering-based simulations, enabling operators to analyse anomalies, estimate degradation, and simulate corrective actions across thousands of assets. This significantly reduces manual effort, while aligning recommendations with execution in existing maintenance systems.
“The integration of Cordant Asset Health with our iCenter platform is a prime example of this,” said Aravind Yarlagadda, senior vice president of Industrial Solutions at Baker Hughes. “We are not just improving visibility into asset condition, but also closing the loop from insight to action—connecting diagnosis to scheduling, intervention and feedback.”
The benefits are already visible. A global LNG operator using Cordant and iCenter reported a 12-hour increase in production availability, equating to $10 million in added value. A fertiliser company saw between two and 15 per cent improvement in equipment availability. In another case, a major petrochemical firm is targeting a $50 million reduction in annual maintenance costs through Cordant’s deployment.
Closing the gap between efficiency and sustainability
Asset-heavy sectors are under growing pressure to balance uptime with environmental and financial constraints. Cordant’s new capabilities reflect this dual mandate.
Opportunity analysis tools now allow operators to identify underperforming systems based on energy efficiency, while integrated risk registers ensure that decision-making remains aligned with wider corporate ESG objectives.
The platform also incorporates new tools for financial visibility, with dashboards that integrate maintenance spend across asset types and link budgetary decisions with operational risk. This, combined with advanced notification workflows, is designed to support cross-functional teams in making faster, better-informed decisions—whether to replace a component, extend its use, or investigate anomalies further.
What emerges is a blueprint for digital asset management that transcends predictive maintenance. Rather than treating analytics as an advisory layer, Baker Hughes is pushing toward a model where AI plays an active operational role—prioritising tasks, initiating responses and quantifying the value delivered.
With asset performance now a lever for both competitiveness and carbon reduction, the deployment of intelligent systems like Cordant APM points to a broader shift in industrial strategy. Technology is no longer limited to avoiding breakdowns; it is being recast as a core enabler of business performance.
In the process, the boundary between IT and operations is becoming increasingly blurred. Cordant does not just interpret data—it shapes workflows, reallocates labour, and drives investment priorities. For the energy and manufacturing sectors, this may prove the most consequential form of AI adoption yet—not because it is visible, but because it is embedded.
Offshore energy enters a new phase of intelligence and integration
The 2025 Offshore Technology Conference (OTC) concluded this week in Houston with a clear signal to the global energy sector: the offshore industry is not only adapting to a volatile energy landscape but actively reshaping it through digital innovation, high-pressure technologies and bold cross-border collaboration. Drawing attendees from more than 100 countries, this year’s event showcased a maturing offshore sector increasingly defined by automation, data integration and performance-based engineering.
While the event celebrated individual and institutional achievements in deepwater engineering, it was the convergence of digital intelligence and physical infrastructure that emerged as the dominant theme across keynote sessions, technical panels and the exhibition floor. With more than 1,000 companies in attendance and 360 technical presentations delivered, the 2025 edition reaffirmed OTC’s role as the industry’s most important platform for surfacing not just technologies but the new logic of offshore operations.
Baker Hughes, SLB, Fugro, and Bosch Rexroth were among the winners of the 2025 Spotlight on New Technology Awards, recognised for tools that automate complex subsea tasks, enhance completions data fidelity, and optimise field production in real time. These systems are increasingly guided by hybrid AI models, combining physics-based simulations with machine learning to identify anomalies, predict outcomes and execute decisions with minimal human intervention.
AI and autonomy reshape the offshore toolkit
Nowhere was this shift more visible than in the growing use of intelligent automation for well completion, fluid testing and asset integrity. SLB’s AutoProfiler for inline fluid analysis, Baker Hughes’ Leucipa ESP optimizer, and DeepOcean’s Driver-Less Tie In Tool represent an emerging class of offshore tools that do not merely gather data but act on it. These systems are being designed to learn from each cycle, improving execution speed and reducing risk at the edge of operational tolerance.
The recognition of Chevron’s Anchor project with the Distinguished Achievement Award for Institutional Excellence underlined how digital and engineering breakthroughs are converging in high-pressure, high-risk environments. Located 140 miles off the Louisiana coast, Anchor has become a benchmark for deepwater development, combining new semi-submersible production platforms with advanced flow assurance and completion systems. The project’s ability to operate at pressures exceeding 20,000 psi, while maintaining a lower-carbon footprint, underscores a wider trend: the move toward efficiency-led resilience rather than scale for its own sake.
In his keynote, Rystad CEO Jarand Rystad noted that future offshore growth will be less about headline capacity and more about “operational leverage through data.” That sentiment was echoed
across sessions focused on integrating asset performance platforms with corporate ESG strategies, reflecting the growing expectation that offshore infrastructure must now deliver visibility, adaptability and verifiable impact.
From frontier engineering to knowledge transfer
OTC 2025 also spotlighted human ingenuity and continuity in an increasingly automated landscape. Jose Formigli, recipient of the Distinguished Achievement Award for Individuals, was celebrated for his decades of technical leadership in Brazilian deepwater projects, and for bridging the gap between engineering rigour and cross-sector collaboration. His role in pioneering subsea developments at Petrobras laid the groundwork for many of the technologies now being refined with digital overlays.
Likewise, Dr Arun Duggal, winner of the Heritage Award, was honoured for his contributions to mooring system innovation and his commitment to mentoring young engineers. Across three decades, Duggal’s work on turret and yoke moorings has enabled safer station-keeping in some of the world’s harshest offshore environments, from West Africa to Australia.
Even as automation advances, OTC reinforced that human knowledge remains at the centre of offshore energy’s evolution—particularly in developing markets and complex frontier geologies. Delegations from Argentina, Guyana, Uruguay and several African nations participated in this year’s “Around the World Series,” illustrating how regional expertise and sovereign strategies are increasingly shaping global energy flows.
From high-pressure subsea innovation to AI-powered completions, OTC 2025 revealed an industry in transition, not just technologically but structurally. The offshore sector’s next chapter will not be written by hydrocarbons alone, but by the speed, intelligence and resilience with which it is able to deploy, adapt and learn. This year’s conference made it clear: the most valuable assets offshore are no longer just found below the seabed, but in the architecture of insight above it.
VisOps
The world’s first visual operations platform.
Powering visual operations for industrial teams with the ultimate platform to store, contextualize, and collaborate on reality capture data.
Start today!
Plans starting at $300 / month
Unlimited users
Consumption-based pricing
Customize for your team
User-based permissions
How to operationalize reality capture
1) Build the primary visualization layer
Combine your existing visual asset data to build a holistic view of your remote assets in 3D
2) Remotely access your industrial assets
Reduce travel to site, enhance your asset data registry, and unlock the value of your reality data
3) Collaborate in reality
Empower your team to collaborate from anywhere for faster, data-driven decisions
Norway doubles down on Arctic oil to secure long-term energy supply
Norway has brought a major new oil field online in the Barents Sea, marking a significant milestone in the country’s long-term energy strategy. The Johan Castberg field, operated by Equinor and located more than 100 kilometres north of the existing Snøhvit development, began production on 31 March and is expected to remain in operation for the next three decades.
With an estimated 450 to 650 million barrels of recoverable reserves, the field will peak at 220,000 barrels of oil per day and contribute a substantial flow of revenues to the Norwegian state. At a time when Europe continues to grapple with questions of energy security, the development strengthens Norway’s position as a reliable exporter of fossil fuels even as the global energy transition accelerates.
While much of the focus in recent years has turned to renewable energy, projects like Johan Castberg reflect a dual-track approach, maximising the economic value of existing hydrocarbon resources while exploring low-carbon technologies in parallel. For Norway, which has built one of the world’s most stable economies on oil and gas exports, the Barents Sea represents both an industrial challenge and a strategic opportunity.
Engineering ambition in a shifting geography
The scale of the Johan Castberg development is considerable. Built around a floating production, storage and offloading (FPSO) vessel tied to an
extensive subsea network of 30 wells, the project required nearly 79 million working hours to bring to first oil. Twelve wells are already operational—enough to reach plateau production by the second quarter of 2025. Drilling will continue through to 2026, sustaining jobs and supply chains in the far north.
“This is a red-letter day,” said Geir Tungesvik, Equinor’s executive vice president for Projects, Drilling and Procurement. “The Johan Castberg field will contribute crucial energy, value creation, ripple effects and jobs for at least 30 years to come. We expect that this major field development with a price tag of NOK 86 billion will be repaid in less than two years.”
More than 70 per cent of the development’s value has been delivered by Norwegian suppliers, with over 40 per cent stemming from Northern Norway. Once operational, the Norwegian share will rise to 95 per cent, underlining the government’s aim to anchor economic benefits locally. One in three employees aboard the FPSO is based in the region, while the field’s support operations are centred in Hammerfest and Harstad, injecting additional resilience into the local economy.
A new frontier for Arctic production
The Castberg project is only the second oil field to be developed in the Barents Sea and remains Norway’s northernmost production site. It combines the Skrugard, Havis and Drivis discoveries made between 2011 and 2014 and lies in water depths of up to 390 metres. Equinor has already identified opportunities to tie in new finds, with a further 250 to 550 million barrels potentially recoverable through future phases of development.
“Johan Castberg opens a new region for oil recovery and will create more opportunities in the Barents Sea,” said Kjetil Hove, Equinor’s executive vice president for Exploration and Production Norway. “We have already made new discoveries in the area and will keep exploring together with our partners.”
While the project has not foregrounded artificial intelligence or digital transformation as differentiators, the complexity of Arctic operations implies significant reliance on advanced digital systems. From remote condition monitoring in harsh weather to subsea data integration and predictive maintenance, production in the Barents Sea increasingly depends on a sophisticated network of analytics and control systems designed to ensure safety and efficiency in isolated environments.
With 84 per cent of the field’s revenue flowing to the state through taxes and direct ownership, Johan Castberg is as much a fiscal asset as it is an energy one. As Europe recalibrates its energy security strategies and global demand for oil evolves, Norway’s decision to anchor new production capacity in the Arctic signals its intent to remain a key player for decades to come. The Barents Sea may once have marked the edge of industrial ambition. Now, it marks a new centre.
Artificial intelligence prepares to take the strain in offshore safety oversight
Artificial intelligence may soon become a critical part of offshore safety management, as a new report from DNV outlines the potential for AI systems to analyse complex incident data and reduce risk across oil and gas platforms. Commissioned by the Norwegian Ocean Industry Authority (Havtil), the pilot project focuses on one of the most persistent hazards in offshore operations—dropped objects— and asks whether language models paired with expert-defined ontologies can uncover the kind of causation patterns that human investigators alone might miss.
The report, titled Use of artificial intelligence to reduce danger and risk of accidents due to falling objects, presents an emerging use case for domain-specific AI tools. Unlike off-the-shelf large language models, the system under development is designed to query historical incident reports, inspection findings and damage analyses in ways that are transparent, context-aware and rooted in decades of safety data.
What sets this initiative apart is its grounding in structured domain knowledge. By combining AI with ontologies, a type of knowledge graph built with the help of engineers and safety experts, DNV has built a prototype that can interpret incident data based on its true operational meaning, not just its linguistic surface. The aim is to turn decades of detailed, underused reports into actionable intelligence.
Making decades of safety reports searchable by intent
For Havtil, formerly known as Petroleumstilsynet, the value lies not just in automating data review, but in creating a step-change in how offshore risks are understood. “Our collaboration with DNV on developing an ontology for falling objects is a promising step forward,” said Morten Langøy, Principal Engineer at Havtil. “We plan to continue to expand this together with the industries’ domain experts to create a more comprehensive model for incident causation.”
Instead of manually sifting through hundreds of documents, safety professionals could soon pose direct, intent-based questions: What are the most common causes of falling objects on platform X? Are there links between shift patterns and incident severity? The AI tool can scan records and summarise patterns far beyond the bandwidth of human teams, providing evidence-based answers that can feed directly into operational decision-making.
The interface is designed to understand natural language queries and respond with transparent, explainable logic. Every step of the AI’s reasoning – queries, data points, assumptions – is visible to the user, building trust in its outputs. Unlike typical generative models, which can hallucinate plausible-sounding but false information, the prototype is engineered to remain within the bounds of its source data.
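To make the idea concrete, the toy Python sketch below shows what an ontology-grounded, auditable query might look like in miniature: free-text causes are mapped to canonical concepts before aggregation, and every processing step is recorded so the reasoning stays visible. The categories and records are invented for illustration and do not reflect DNV’s or Havtil’s actual data model.

```python
# Minimal, hypothetical sketch of an ontology-grounded incident query with a
# visible reasoning trace. Concepts and records are invented for illustration.
from collections import Counter

# A toy "ontology": free-text cause phrases mapped to canonical concepts.
CAUSE_ONTOLOGY = {
    "corroded clamp": "securing_failure",
    "missing secondary retention": "securing_failure",
    "lifting over live equipment": "lifting_practice",
    "tool left at height": "housekeeping",
}

INCIDENTS = [
    {"platform": "X", "cause_text": "corroded clamp", "severity": 3},
    {"platform": "X", "cause_text": "tool left at height", "severity": 1},
    {"platform": "X", "cause_text": "missing secondary retention", "severity": 4},
    {"platform": "Y", "cause_text": "lifting over live equipment", "severity": 2},
]

def most_common_causes(platform: str) -> tuple[Counter, list[str]]:
    """Answer 'What are the most common causes of dropped objects on platform X?'
    while recording every step so the reasoning stays auditable."""
    trace = [f"filter: platform == {platform!r}"]
    subset = [i for i in INCIDENTS if i["platform"] == platform]
    trace.append(f"matched {len(subset)} incident reports")
    trace.append("map free-text causes to ontology concepts")
    concepts = [CAUSE_ONTOLOGY.get(i["cause_text"], "unclassified") for i in subset]
    trace.append("aggregate by concept")
    return Counter(concepts), trace

counts, trace = most_common_causes("X")
print(counts.most_common())   # [('securing_failure', 2), ('housekeeping', 1)]
print("\n".join(trace))       # the visible reasoning steps
```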
“The report explores a promising approach to leveraging new technology to mitigate risks in the industry, particularly through the use of ontologies combined with AI,” said David R. Watson, Innovation Lead at DNV – Digital Solutions.
A safer offshore sector built on shared insight
Dropped object incidents, while often avoidable, have remained a stubborn cause of injury and equipment loss in offshore operations. The difficulty is not that lessons are not being learned—it is that they are often buried in isolated reports, hard to extract, and harder still to synthesise at scale. AI, when used appropriately, offers a solution to this persistent knowledge bottleneck.
The ambition for the technology goes beyond individual organisations. If AI systems like this one could be applied to anonymised industry-wide datasets, the implications for safety would be substantial. Patterns across geographies, operators, equipment types and weather conditions could be detected early, transforming how both regulators and companies pre-emptively manage risk.
Crucially, the development of such tools is being undertaken with safeguards in mind. The system logs every process it performs and makes every query visible, ensuring traceability and auditability, key requirements for high-trust environments such as offshore safety.
While still at a pilot stage, the project reflects a broader shift in how regulators and operators alike are viewing artificial intelligence. No longer confined to predictive maintenance or seismic analysis, AI is being invited into the governance space, where its role is not to replace judgement, but to inform it. As offshore infrastructure ages, operations grow more complex, and regulatory expectations increase, systems that can scale institutional knowledge may prove as valuable as any new piece of equipment on deck.
Digital twins are rewriting the rules of oil and gas operations
AI-powered digital twins are reshaping how assets are designed, operated, and optimised. They fuse real-time intelligence with predictive control to reduce risk and raise performance. The result is not just greater efficiency but an entirely new model of asset management built on simulation, insight, and adaptation.
The oil and gas sector is no stranger to complexity. The industry balances a precarious mix of volatility, regulation, cost pressure, and environmental scrutiny from offshore platforms to downstream refineries. Yet within that operational tightrope, a decisive shift is underway, driven not by hardware, but by its digital counterpart. The arrival of AI-enabled digital twins has introduced a new kind of visibility that makes sense of noise, compresses decision time, and blurs the boundaries between planning and execution.
From a static replica to an intelligent system
What began as a graphical mirror of physical systems has evolved into something more dynamic. The modern digital twin is no longer a passive reflection but a living simulation, built to learn, adapt, and act. By embedding AI models into the twin, engineers move beyond data aggregation to enable forecasting, anomaly detection, and prescriptive control.
“The digital twin is becoming executable,” Shivdeep Gaagat, Siemens’ Industry Lead for Energy, Simulation and Test Division, explains. “It is not just representing reality but simulating how a system would behave in various possible futures. This unlocks new levels of control and optimisation.” The executable twin enables closed-loop performance management, where the model is not just observing the system but helping to run it. In oil and gas, this includes adjusting process parameters to maintain safety margins, reduce emissions, and extend equipment life.
This intelligence is particularly valuable in process-intensive environments such as upstream production or LNG terminals, where variables shift constantly and operational leeway is narrow. Even when automated, traditional control systems are constrained by what they have seen before. AI changes that equation by learning from complex patterns
across entire fleets and lifecycles.
Engineering certainty into uncertainty
In upstream exploration and production, the margin for error is thin. Reservoir performance, drilling operations, and well integrity rely on a clear understanding of subsurface and surface interactions. Here, digital twins are not just a monitoring tool but an engineering asset in their own right.
One critical use case is well planning. By building an AI-enhanced digital twin of the drilling environment, operators can simulate bit behaviour, downhole pressure, and formation responses under a range of conditions. This reduces non-productive time and improves safety while cutting back on expensive trial-and-error.
“AI makes it possible to generate an ensemble of likely future states, not just a single outcome,” Mark Speyers, Digital Marketing Manager at SymphonyAI Industrial, adds. “This ability to model uncertainty turns the digital twin into a decision-support engine. It allows planners to ask ‘what if’ questions in real time and get meaningful answers.”
In asset integrity, AI-enabled digital twins can monitor stress accumulation and degradation pathways over time. This is critical for ageing infrastructure where replacement is not always economically viable. The twin becomes a continuously evolving risk model, providing the basis for predictive maintenance and life extension strategies grounded in data rather than assumptions.
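In its simplest form, that continuously evolving risk model can be pictured as a degradation trend that is re-fitted whenever new inspection data arrives. The sketch below fits a linear wall-loss trend and projects the time to a minimum allowable thickness; the figures and the linear assumption are purely illustrative, far simpler than a real integrity twin.

```python
# Illustrative sketch: re-fit a corrosion/wall-loss trend as new inspection
# data arrives and project remaining life. A real integrity twin would use
# far richer physics and uncertainty handling; the linear fit only shows
# the continuously updated pattern.
import numpy as np

def remaining_life_years(years: np.ndarray, wall_mm: np.ndarray,
                         min_allowable_mm: float) -> float:
    """Fit a straight line to measured wall thickness and project the time
    at which it reaches the minimum allowable thickness."""
    slope, intercept = np.polyfit(years, wall_mm, deg=1)  # mm per year
    if slope >= 0:
        return float("inf")  # no measured thinning trend
    year_at_limit = (min_allowable_mm - intercept) / slope
    return max(0.0, year_at_limit - years[-1])

# Inspection history: thickness readings (mm) at successive years in service.
years = np.array([0.0, 3.0, 6.0, 9.0])
wall = np.array([12.0, 11.6, 11.1, 10.7])

print(f"{remaining_life_years(years, wall, min_allowable_mm=9.5):.1f} years")
```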
Bridging the IT-OT divide
Deploying digital twins at scale demands a close integration between operational technology (OT) and enterprise IT systems. This is often the Achilles’ heel of transformation in oil and gas, where proprietary protocols, data silos, and legacy infrastructure remain entrenched.
Digital twins thrive on context. To be useful, they must ingest structured and unstructured data from engineering models, sensors, geospatial inputs, and ERP systems. The challenge is not just technical but organisational, breaking down walls between functions so that the digital twin has a complete and accurate picture.
The most effective implementations do not attempt to unify everything into a single model. Instead, they use a federated approach where discipline-specific models, such as mechanical, electrical, and process, are stitched together through a common framework. AI sits at the centre, harmonising these inputs and identifying inconsistencies or emerging risks across domains.
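A rough sketch of that federated pattern is shown below: discipline models stay separate but answer through a common interface, while a thin coordination layer looks for cross-domain risks, here an operating value drifting past a design limit. The tags, values and interfaces are invented for illustration.

```python
# Hypothetical sketch of a federated twin: discipline-specific models remain
# separate but answer through a common interface, and a thin coordination
# layer looks for cross-domain risks. All tag names and values are invented.
class ProcessModel:
    """Live operating values from the process/operations domain."""
    def operating_value(self, tag: str) -> float:
        return {"P-101.discharge_pressure_bar": 42.0}[tag]

class MechanicalModel:
    """Design limits from the mechanical/engineering domain."""
    def design_limit(self, tag: str) -> float:
        return {"P-101.discharge_pressure_bar": 40.0}[tag]

def cross_domain_check(process: ProcessModel, mechanical: MechanicalModel,
                       tags: list[str]) -> list[str]:
    """Flag tags where the operating value exceeds the design limit,
    an inconsistency no single-discipline model would see on its own."""
    findings = []
    for tag in tags:
        operating = process.operating_value(tag)
        limit = mechanical.design_limit(tag)
        if operating > limit:
            findings.append(f"{tag}: operating {operating} bar exceeds design {limit} bar")
    return findings

print(cross_domain_check(ProcessModel(), MechanicalModel(),
                         ["P-101.discharge_pressure_bar"]))
```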
This convergence is transforming how engineering, operations, and business teams interact. Rather than waiting for a report or dashboard, users can interrogate the digital twin directly,
asking questions in natural language or querying simulation scenarios visually. The result is a more intuitive and immediate understanding of complex systems, democratising insights that were once trapped in specialist silos.
Unlocking value across the asset lifecycle
The most compelling advantage of digital twins is that their value does not diminish after commissioning. In fact, it grows. The twin evolves alongside the physical asset, incorporating new data, refining its models, and feeding back into decision-making at every stage of the lifecycle.
At the design phase, digital twins reduce rework by enabling multi-disciplinary coordination and clash detection before physical construction begins. During commissioning, they act as a training and validation platform, helping operators understand control sequences, interlocks, and emergency procedures.
Once in operation, the AI-enhanced twin shifts from a planning tool to a performance advisor. “We can predict process deviations before they occur and recommend interventions specific to the current operating context,” Speyers explains. “This moves us from reactive troubleshooting to proactive performance optimisation.”
During turnaround planning, the digital twin supports scenario analysis and virtual inspections. For brownfield expansions or decommissioning, it becomes a repository of historical decisions and simulations, ensuring that institutional knowledge is not lost.
These capabilities are now extending into emissions and sustainability tracking. With environmental reporting under increasing scrutiny, operators are using digital twins to validate emissions factors, verify flare volumes, and model the impact of operational changes on carbon intensity. This enables more
credible disclosures and better alignment with regulatory and investor expectations.
The infrastructure of intelligence
Despite the promise, the digital twin is not a plug-and-play solution. It requires infrastructure, both technical and organisational, that is often lacking. Data pipelines must be reliable, semantic models must be standardised, and AI models must be traceable and auditable. Perhaps most importantly, the cultural mindset must shift from install and forget to model and evolve.
AI introduces a new kind of dependency. “If the digital twin is making recommendations, it must be explainable,” Gaagat says. “Operators need to understand the ‘why’ behind a suggestion. Otherwise, trust breaks down.” This highlights the need for transparent AI pipelines, where inputs, assumptions, and inference steps are all documented and subject to review.
Cloud platforms and edge computing are also playing a growing role. Edge deployment allows real-time inference close to the asset, reducing latency and bandwidth requirements. The cloud, meanwhile, provides a collaborative backbone for teams working across time zones, functions, and vendors.
Therefore, the future of the digital twin in oil and gas is not a monolith but a mosaic, built from interoperable parts, curated datasets, and continuously learning models. AI is the glue that binds these together, enabling the twin to act, not just represent.
What emerges is a new operating philosophy: one where simulation and reality exist in a constant loop, decisions are informed by possible futures, and engineering precision meets adaptive intelligence. In an industry that has always lived with risk, the AI-powered digital twin offers something rare: clarity.
FieldTwin is where operators and EPCs design, visualize and collaborate in perfect harmony.
Digital twins and the decarbonisation imperative in oil and gas
The oil and gas industry must operationalise its data to stay competitive in a low-carbon future. The integrated digital twin offers the clearest path forward for emissions reduction, efficiency, and agility across asset lifecycles.
Oil and gas companies face a structural shift in their operating environment, which no longer tolerates a fragmented approach to data or a siloed view of emissions. Investors, regulators, and internal stakeholders are aligned on a single demand: operationalise decarbonisation. But the path forward is fraught with technical, economic, and cultural complexities that spreadsheet-driven optimism cannot solve. It requires precision, real-time insight, and, increasingly, digital twins.
These are not the static visual models of yesterday. The integrated digital twin combines live operational and engineering data in a high-fidelity, continuously updating environment. It becomes a platform for both visibility and execution, an intelligent decision support system that contextualises carbon, cost, and performance in a single operational view. For an industry with tight margins and rising expectations, this is not a digital indulgence. It is infrastructure.
“The challenge of decarbonisation is not simply one of technology,” Craig Harclerode, Industry Principal for Oil and Gas at AVEVA, explains. “It is about integrating data, aligning incentives, and delivering outcomes that satisfy not only financial stakeholders but also the growing list of environmental, social, and regulatory expectations.”
Decarbonisation by design, not reaction
The oil and gas industry contributes roughly nine per cent of global greenhouse gas emissions. Reducing that number is non-negotiable. But doing so requires more than project-level interventions or bolt-on analytics. It demands a coherent, enterprise-wide digital foundation that allows decision-makers to treat carbon as a managed metric, not a reporting headache.
Harclerode states: “There is no path to real-time decarbonisation without real-time operational and carbon data management. Unless your asset teams and executive stakeholders work from the same contextualised truth source, you are optimising in the dark.”
This alignment becomes particularly critical when companies attempt to model competing variables such as cost, efficiency, and emissions intensity across entire value chains. Whether adjusting a crude slate or configuring a compressor load,
operational intelligence’s value is only realised when insights are timely and actionable. That is the promise of the digital twin.
But deploying a twin in name only solves little. The real breakthrough comes from integrating layers of analytics – streaming, descriptive, and predictive – into a hybrid architecture that bridges on-premise operational systems with cloud-based carbon intelligence. When built this way, the twin becomes more than a digital reflection. It becomes a strategic instrument.
The eight levers of carbon reduction
Harclerode outlines eight operational levers to translate intention into action, each requiring different degrees of digital enablement. These include traditional energy efficiency upgrades, optimisation of carbon and financial value chains, carbon accounting integration, electrification and infrastructure transformation, alternative energy diversification, hydrogen production, carbon byproduct utilisation, and carbon capture and storage (CCUS). Each lever is complex. Together,
they demand orchestration.
Take hydrogen, for example. Green hydrogen promises a clean fuel future, but its production requires real-time optimisation of electrolyser loads, renewable energy inputs, and oxygen byproduct usage. This is not a spreadsheet problem. It is a systems problem.
Or consider CCUS. Captured carbon must be purified, stored, and often repurposed. That chain of custody requires verifiable, high-quality data to be tracked from source to sink. It requires a carbon chart of accounts as rigorous and auditable as the financial ledger. “If companies do not integrate carbon accounting into their operational systems,” Harclerode says, “they are unlikely to achieve either compliance or credibility.”
And then there is electrification. Swapping steam drivers for electric assets may reduce on-site emissions but also introduces new loads, voltage transients, and reliability concerns. It is a networked issue that requires synchronisation of physical infrastructure and virtual control systems.
Digital twins that combine engineering models with live asset data can manage this complexity. Practical upgrades, such as insulation improvements, heat integration, and flaring reductions, also speak to both carbon and cost. These initiatives benefit from consistent data-driven benchmarking and modelling. What distinguishes successful execution in all these areas is not the novelty of the intervention but the availability of a continuously updated data environment to plan, test, deploy, and adapt.
The rise of the decision-ready SME
Yet for all the talk of architecture and analytics, the human dimension is paramount. At the sharp end of decarbonisation are subject matter experts (SMEs) whose decisions, if adequately supported, can shift emissions profiles in real time. The digital twin becomes their cockpit.
“Subject matter experts need access to structured, contextualised data that matches their domain knowledge,” Harclerode explains. “You cannot expect them to become data scientists overnight.
But you can give them the tools to act decisively.”
This is where self-serve analytics becomes essential. Dashboards and alerting mechanisms grounded in streaming data allow SMEs to intervene early, before excursions become events. Operational data becomes a living asset, not a buried one. And with digital twins enabling direct feedback between design parameters and real-world performance, such as pump efficiency curves versus actual flow rates, every decision becomes more informed.
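The pump example translates directly into code. The sketch below interpolates the expected efficiency from a design curve at the observed flow rate, compares it with the measured value, and raises an alert when the shortfall passes a threshold; the curve points and threshold are illustrative assumptions.

```python
# Illustrative sketch of the pump-efficiency check described above: compare
# measured efficiency against the design curve at the observed flow rate and
# raise an alert when the shortfall exceeds a threshold.
import numpy as np

# Design curve: flow rate (m3/h) vs expected efficiency (fraction).
design_flow = np.array([100.0, 200.0, 300.0, 400.0])
design_eff = np.array([0.60, 0.74, 0.78, 0.70])

def efficiency_shortfall(observed_flow: float, measured_eff: float) -> float:
    """Interpolate the design efficiency at the observed flow and return
    how far below it the pump is actually running."""
    expected = np.interp(observed_flow, design_flow, design_eff)
    return expected - measured_eff

shortfall = efficiency_shortfall(observed_flow=260.0, measured_eff=0.69)
if shortfall > 0.05:  # alert threshold: 5 percentage points below design
    print(f"Alert: pump running {shortfall:.0%} below its design curve")
```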
Moreover, the twin evolves as operational teams become more accustomed to digital tooling. It becomes a learning system, continually refined through the outcomes it helps shape. AI plays a supporting role here, not as a replacement for human judgment, but as an enhancer of it. And this is where the conversation around AI must be reframed. AI is not a chatbot or novelty algorithm in oil and gas. It is embedded intelligence within operational processes. It is predictive maintenance powered by thousands of data points. It is emissions forecasting that reacts in real time to process adjustments. It is not
separate from the twin. It is fused into it.
From data lakes to living models
Too many oil and gas firms still rely on unstructured data lakes that turn insight into archaeology. These systems hoard information but lack the integration and intelligence to convert it into an operational advantage. Digital twins offer a different paradigm.
Tightly coupling design models, operational data, and advanced analytics allows companies to move from historical reporting to predictive action. They create a live model of the plant that reflects reality and helps shape it.
Harclerode puts it succinctly: “The engineering digital twin shows what should be happening. The operational twin shows what is happening. But the integrated twin tells you what to do about it.” That synthesis is where competitive advantage lies. As capital becomes more selective and regulatory pressure mounts, the oil and gas companies that succeed will not be those that simply monitor emissions. They will be those that can manage them proactively, continuously, and intelligently.
Rewriting the reservoir with AI to accelerate carbon storage and oil production
Physics-informed neural operators bring speed and intelligence to subsurface modelling. AI transforms how companies simulate complex underground systems, enabling faster decision-making for oil production and carbon storage
AI is beginning to unearth answers buried deep below ground, reshaping how energy companies model, manage, and optimise the reservoirs that underpin fossil fuel production and carbon storage. Physics-informed neural operators are at the heart of this shift, offering a faster, data-driven alternative to traditional simulators while retaining physical realism and predictive power. With carbon capture and storage (CCS) seen as critical to net-zero targets and the efficiency of existing oil and gas infrastructure under intense scrutiny, the ability to model subsurface behaviour with precision, speed and confidence has become an urgent priority.
The pressure to act is not merely economic. Geological complexity, legacy data systems, and environmental constraints are converging into one of the most computationally intensive problems in the energy transition. It is here that TotalEnergies is making its move.
From equations to operators
For decades, reservoir simulation has relied on solving complex partial differential equations that describe fluid flow through porous media. These methods are trusted and accurate, but they are also slow, resource-intensive, and inflexible.
According to Elias Cherif, Data Scientist at TotalEnergies, that is no longer sustainable. “Subsurface media are highly non-linear and chaotic dynamic systems,” he says. “A slight change in permeability can drastically alter CO2 plume behaviour. Using traditional high-resolution simulators, such as a one-metre grid over a 100-square-kilometre area, involves millions of calculations.
“Machine learning and data science offer an alternative. These approaches directly learn the relationship between inputs and outputs, bypassing the need to solve all underlying equations explicitly. This allows for the development of models tailored to specific geological contexts.”
This leap is enabled by a confluence of factors: the sheer volume of available sensor and simulation data, rapid advances in AI architectures, and a policy-driven need to accelerate CCS deployment. The goal is no longer to replace physics but to encode it. This is the promise of physics-informed neural operators (PINOs), a class of AI models that incorporate governing equations into their architecture to enhance learning and generalisation.
Speeding up carbon storage simulation
TotalEnergies began by applying Fourier neural operators (FNOs) to model CO2 injection and migration over time. The task was to forecast pressure and gas saturation in offshore CCS sites across a thirty-year horizon with sufficient accuracy to ensure containment.
“Let me first summarise how a traditional simulator works,” Cherif says. “It involves three steps: inputting the reservoir’s geological model, rock properties, and injection plan; performing numerical simulation by solving Darcy’s law and mass conservation equations using finite difference methods; and producing
outputs such as pressure distribution, fluid saturation and production rates. While accurate, this approach is computationally expensive and unsuited for rapid scenario testing.”
Instead, the neural operator model learns from thousands of prior simulations. It transforms permeability, porosity, and other inputs into a frequency domain using Fourier transforms, allowing it to model global interactions without solving the full system of equations at each step.
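That Fourier-domain step is the core of the FNO architecture. The PyTorch sketch below shows a single spectral layer of the kind such models are built from: transform to the frequency domain, multiply a truncated set of modes by learned complex weights, transform back. Channel counts, grid size and mode truncation are illustrative; TotalEnergies’ production models, built with NVIDIA Modulus, are considerably more elaborate.

```python
# Minimal sketch of one Fourier neural operator spectral layer (2D).
import torch

class SpectralConv2d(torch.nn.Module):
    """FFT the input, multiply a truncated set of Fourier modes by learned
    complex weights, then inverse-FFT back to the physical grid."""
    def __init__(self, in_ch, out_ch, modes1, modes2):
        super().__init__()
        scale = 1.0 / (in_ch * out_ch)
        self.modes1, self.modes2 = modes1, modes2
        self.w1 = torch.nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes1, modes2, dtype=torch.cfloat))
        self.w2 = torch.nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes1, modes2, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, in_ch, H, W)
        x_ft = torch.fft.rfft2(x)              # to the frequency domain
        out_ft = torch.zeros(x.shape[0], self.w1.shape[1],
                             x.shape[-2], x.shape[-1] // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # Keep only the lowest (positive and negative) frequency modes.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.w1)
        out_ft[:, :, -self.modes1:, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, -self.modes1:, :self.modes2], self.w2)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to physical space

# Toy usage: permeability + porosity as two input channels on a 64x64 grid,
# one output channel (e.g. predicted pressure).
layer = SpectralConv2d(in_ch=2, out_ch=1, modes1=12, modes2=12)
fields = torch.randn(4, 2, 64, 64)
print(layer(fields).shape)   # torch.Size([4, 1, 64, 64])
```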
Training on a public dataset from Imperial College London, which includes 24 snapshots over three decades, the team developed separate models for pressure and saturation. They selected tailored loss functions to match the physical characteristics of each variable. Pressure, governed by elliptic equations, was optimised with pointwise squared error. Saturation, affected by sharp fronts and governed by hyperbolic equations, required a more nuanced approach that penalised errors at the plume edge.
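In code, the two loss choices might look like the sketch below: plain pointwise squared error for pressure, and a front-weighted squared error for saturation that up-weights cells near the steep plume edge, approximated here by the gradient magnitude of the true field. The weighting scheme is an assumption for illustration, not TotalEnergies’ exact formulation.

```python
# Sketch of the two loss functions described above. The edge-weighting rule
# is an illustrative assumption.
import torch

def pressure_loss(pred, true):
    # Pointwise squared error suits the smooth, elliptic pressure field.
    return torch.mean((pred - true) ** 2)

def saturation_loss(pred, true, edge_weight=10.0):
    # Approximate the plume front by the gradient magnitude of the true field.
    dy = torch.zeros_like(true)
    dx = torch.zeros_like(true)
    dy[..., 1:, :] = true[..., 1:, :] - true[..., :-1, :]
    dx[..., :, 1:] = true[..., :, 1:] - true[..., :, :-1]
    front = torch.sqrt(dx ** 2 + dy ** 2)
    weights = 1.0 + edge_weight * front / (front.max() + 1e-8)
    return torch.mean(weights * (pred - true) ** 2)

pred = torch.rand(4, 1, 64, 64)
true = torch.rand(4, 1, 64, 64)
print(pressure_loss(pred, true).item(), saturation_loss(pred, true).item())
```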
“Training was performed using the NVIDIA Modulus framework with 2,000 examples over 100 epochs on an H100 GPU,” Cherif continues. “The model achieved inference speeds 1,000 times faster than traditional simulators while maintaining high accuracy on test data. The R² scores remained high across all 24 time steps.”
Errors were mainly found at the leading edge of the CO2 plume, where saturation gradients are steepest. However, even these could be mitigated by refining loss functions and expanding training diversity. The results offer a new way to run what-if scenarios at scale, with implications for policy risk assessment and real-time reservoir monitoring.
Solving the inverse with PINO and VCAE
Forward prediction is only half the challenge. Understanding how historical production maps to underlying geology is crucial for CCS and oil recovery alike. The problem is reversed here: infer rock properties from surface-level production data. TotalEnergies tackled this with a combination of PINO and a variational convolutional autoencoder (VCAE).
“The forward problem involves predicting pressure and oil saturation from known parameters such as permeability and porosity,” Cherif explains. “This enables computation of oil and water production rates. The inverse problem is more complex, requiring determining unknown geological characteristics from limited observed data.”
Using the black oil model in a synthetic reservoir of 32,000 cells, the team simulated a two-phase flow, oil and water. With only six wells (four producing, two injecting) and significant geological variation, the training set comprised just 600 examples. Nevertheless, the PINO model’s sequential architecture, which uses previous time steps as inputs to predict the next, allowed it to capture complex dynamics without overfitting.
“Only the initial pressure and saturation values at t = 0 are needed to begin testing,” Cherif explains. “The model then propagates sequentially through all 51 time steps. Flow rates are updated using Boussinesq’s equation to maintain physical consistency.” Despite the increased training time compared to the FNO model, PINO produced better accuracy, especially in long-horizon predictions. It proved particularly adept at preserving physical
constraints, making it a viable candidate for integration into decision-making workflows.
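The sequential roll-out Cherif describes reduces to a simple loop: start from the state at t = 0 and feed each prediction back in as the next input. The sketch below shows that pattern with a placeholder standing in for the trained PINO.

```python
# Illustrative sketch of the sequential roll-out: the previous time step
# becomes the input for the next prediction. `model` is a placeholder for
# the trained operator so the loop is runnable.
import torch

def rollout(model, initial_state: torch.Tensor, n_steps: int) -> torch.Tensor:
    """Propagate pressure/saturation channels forward n_steps time steps."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(model(states[-1]))   # feed each prediction back in
    return torch.stack(states, dim=1)      # (batch, time, channels, H, W)

# Placeholder operator: identity plus a small perturbation, just to run.
model = lambda state: state + 0.01 * torch.randn_like(state)

initial = torch.rand(1, 2, 64, 64)         # pressure + saturation at t = 0
trajectory = rollout(model, initial, n_steps=50)
print(trajectory.shape)                    # torch.Size([1, 51, 2, 64, 64])
```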
Managing uncertainty with latent space inference
The final challenge was reconstructing the permeability field, a classic ill-posed inverse problem. With more unknowns than observations, traditional methods tend to become unstable. Errors compound, particularly if the forward simulator is imperfect. To solve this, TotalEnergies employed a VCAE to reduce the dimensionality of the permeability field and encode prior geological knowledge into a latent space. Each permeability map is represented as a set of latent variables, which can then be sampled, manipulated, and decoded.
“The model has three components: an encoder that extracts latent variables from the input, a reparameterisation step that creates the latent space distribution, and a decoder that reconstructs permeability fields from latent vectors,” Cherif explains. “This enables generation of new, plausible reservoirs not found in the original dataset.”
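A compact sketch of that three-part structure is shown below in PyTorch. The layer sizes and 64-by-64 grid are placeholders; only the encoder, reparameterisation and decoder pattern mirrors the description.

```python
# Sketch of a variational convolutional autoencoder for permeability maps (PyTorch).
import torch
import torch.nn as nn

class VCAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        self.to_mu = nn.LazyLinear(latent_dim)
        self.to_logvar = nn.LazyLinear(latent_dim)
        self.decoder = nn.Sequential(
            nn.LazyLinear(32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):                       # x: (batch, 1, 64, 64) permeability map
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        return self.decoder(z), mu, logvar      # decoded field + latent statistics

# New, plausible reservoirs can be generated by sampling z from a standard normal
# distribution and passing it through the decoder alone.
```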
The team used the adaptive regularised ensemble Kalman inversion (AREKI) algorithm to match historical production and update latent variables iteratively. Each step compared simulated outputs with real production, adjusting the latent space accordingly. The result: in just 40 minutes on a single H100 GPU, the model generated permeability ensembles that aligned closely with observed production at all wells. The time savings are significant compared to eight hours with a conventional simulator.
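The iteration can be illustrated with a stripped-down ensemble Kalman update in latent space, sketched below in NumPy; the adaptive regularisation that gives AREKI its name is deliberately left out.

```python
# Stripped-down ensemble Kalman update in latent space (NumPy); illustrative only.
import numpy as np

def eki_update(Z, simulate, d_obs, noise_std=0.05):
    """Z: (n_ens, latent_dim) latent ensemble; simulate: latent vector -> predicted
    production data; d_obs: observed production vector."""
    G = np.array([simulate(z) for z in Z])               # forward-run the ensemble
    Z_mean, G_mean = Z.mean(0), G.mean(0)
    C_zg = (Z - Z_mean).T @ (G - G_mean) / (len(Z) - 1)  # latent/data covariance
    C_gg = (G - G_mean).T @ (G - G_mean) / (len(Z) - 1)  # data covariance
    K = C_zg @ np.linalg.inv(C_gg + noise_std**2 * np.eye(G.shape[1]))
    # Nudge each member toward (perturbed) observations; decode afterwards to
    # obtain the corresponding permeability fields.
    return Z + (d_obs + noise_std * np.random.randn(*G.shape) - G) @ K.T
```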
“Comparing the reconstructed permeability map with the true geological model, we found strong similarities, particularly around the well locations,” Cherif continues. “Gaps in reconstruction occurred in areas with no wells, where observational data were lacking. Nonetheless, the main geological features were successfully recovered.”
Toward faster, more agile workflows
While these models are not poised to replace traditional simulators altogether, they are beginning to reshape the boundaries of what is computationally feasible. AI-powered surrogates offer a compelling balance of speed and fidelity for tasks such as scenario generation, history matching, and optimisation. However, challenges remain. Training still demands high-performance computing, particularly for real-world reservoirs with millions of cells. Memory constraints, data quality, and the generalisability of learned models also present limitations. Parallelisation across GPU clusters and integration into hybrid modelling pipelines will be key.
“Traditional simulators will continue to be the reference for high-fidelity modelling,” Cherif concludes. “The aim is not to replace reservoir engineers but to provide complementary, AI-based tools that offer new approaches to subsurface modelling.”
As AI models become more sophisticated and compute infrastructure continues to scale, subsurface workflows will shift from being a computational bottleneck to a strategic asset. For companies navigating the complexity of hydrocarbon extraction and carbon sequestration, the ability to simulate the unseen may be the most powerful tool.
Main Digital Contractor –a better approach to data delivery
Data handover failures in oil and gas projects are still far too common, often undermining the performance of even the most sophisticated digital assets. By appointing a Main Digital Contractor, operators can embed data integrity from the outset and ensure the transition from project to operations is seamless, efficient and value-driven.
By Rob Kennedy, Global Director, Digital Asset and DataOps, Wood, and Michael Edwards, Head of Data Partnerships and Platforms, Wood
Successful oil and gas capital projects aren’t just about delivering on time and within budget. They also require a flawless start-up and ramp-up of the asset. As operations become increasingly digitalized, the need for complete and trusted data has never been greater. Yet, data handover from project to operations remains a significant challenge. By engaging a Main Digital Contractor and treating the digital asset as a critical element of the physical asset, the quality of the data handover can be significantly enhanced.
The data handover challenge
Capital projects are leveraging increasing levels of digital technology to enhance efficiency, reduce rework and mitigate risk. Despite the growing adoption of data standards and data-centric practices, challenges with handover to operations remain, with incomplete datasets, inaccuracies and inconsistencies adding to late delivery.
Many operators have successfully integrated digital twins into their operational workflows, where they are key enablers for safe, reliable, optimized and profitable operations. For new assets, the start-up, ramp-up and early operations period is crucial, with delays potentially costing millions of dollars per day. Data handover issues can impede the proactive management and optimization of the asset during this critical period, creating knock-on consequences.
Operations teams are often left to address data issues from handover. The cost to rework datasets, extract additional data from documentation and rectify the 3D model can be substantial, and is a hidden cost that isn’t budgeted for.
The industry must find a better way to manage data on capital projects, one that simultaneously delivers benefit to the project and ensures a high-quality, timely handover to operations.
Data standards alone are not enough
Selecting an appropriate data standard is clearly important, but its successful implementation is imperative, requiring buy-in and prioritization across the project. However, many project teams, contractors and suppliers view data handover as a burden, perhaps due to a lack of understanding of how that data will be used to deliver value.
Work to achieve a successful data handover should start in the select phase. A lifecycle digital strategy for the asset should be developed, shaping activities in the project phase and into operations. Early engagement of project stakeholders and contractors is essential and aims to foster a mindset change whereby data is treated as an important deliverable.
In the define phase, data use cases in the project and in operations should be identified, including any unique or particularly valuable areas for the specific asset. This allows an appropriate data standard to be selected, and any additional attributes to be incorporated as needed. Data requirements then need to be cascaded across the project, implemented by engineering teams and incorporated into contracts. Importantly, there must be sufficient commercial incentive (or penalty) to ensure that contractors meet these requirements.
Data gathering and assurance begins in the execute phase. In addition to the existing practice of validating data against the standard and checking completeness, it’s critical to ensure that there are no data gaps due to missing tags. Field verification of data and models should also be performed to enable as-building.
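As a toy illustration of the missing-tag check described above, the snippet below compares a design tag register against handed-over records; the field names and inputs are hypothetical.

```python
# Illustrative check for data gaps due to missing tags. Inputs are hypothetical:
# a tag register extracted from design deliverables, and the dataset actually
# handed over with a set of required attributes per tag.
def find_handover_gaps(design_tags, handed_over_records, required_attrs):
    handed_over = {r["tag"]: r for r in handed_over_records}
    missing_tags = sorted(set(design_tags) - set(handed_over))
    incomplete = {
        tag: [a for a in required_attrs if not rec.get(a)]
        for tag, rec in handed_over.items()
        if any(not rec.get(a) for a in required_attrs)
    }
    return missing_tags, incomplete

# Example (placeholder tag numbers and attributes):
# gaps, partial = find_handover_gaps(
#     ["20-PT-1001", "20-PV-1001"], records,
#     ["manufacturer", "model", "design_pressure"])
```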
Finally, to prepare operations to receive the data handover, a suite of dedicated processes and procedures is required to sustain and maintain data over the asset life.
A successful data handover requires significant effort and strong engagement from all parties involved, including contractors and suppliers. Having a dedicated data team to drive this process across the project and supply chain will significantly improve the quality of the data handover, and help to mitigate the risk and cost of re-work.
Appointing a Main Digital Contractor
A Main Automation Contractor (MAC) is often employed on capital projects to provide a unified approach to control systems from the define phase through to operations. Similarly, a Main Digital Contractor can enhance digitalization on the project by coordinating all stakeholders and managing all digital and data activities. Starting in the select phase, the Main Digital Contractor leads the creation of the lifecycle digital strategy for the asset, with significant influence from operations. Then, in the define phase, they identify and prioritize digital initiatives and document the associated data requirements, overseeing adoption by the project and inclusion in contracts.
In readiness for execute, a ‘pre-operations’ digital twin is deployed. This serves as the central data hub for the duration of the project, enabling data exchange with completions and commissioning contractors and operations readiness teams, before facilitating a seamless transition to operations.
The pre-ops twin is utilized by the Main Digital Contractor throughout the execute phase to gather, manage and validate data from each contractor, utilizing data pipelines between the pre-ops twin and each contractor’s own digital environment.
Data validation, completeness and consistency issues are highlighted in the pre-ops twin to enable their prioritized close-out. Then, to assure data accuracy, verification tasks are incorporated into commissioning inspection test records, with findings being used to as-build data and models.
As the operations phase approaches, the pre-ops twin is utilized to prepare data and models for handover, accelerate the maintenance and integrity builds, and load data into operational systems. This enables these systems, and their connection to the operational digital twin, to be tested in readiness for start-up.
Finally, before handover the Main Digital Contractor establishes a data operations, or DataOps, team to govern, update, maintain and manage change to data, ensuring that operations teams receive high-quality, actionable data.
Conclusion
To ensure successful data handover from the project, operations need to actively influence the lifecycle data requirements and processes from the outset. A Main Digital Contractor is essential in this process, engaging project and operations stakeholders and driving a consistent digital approach across the project. As oil and gas operations become increasingly digitalized, the value of high-quality, trusted data is becoming more evident. Remediating existing data for operating assets to achieve the necessary level of data quality requires considerable cost and effort – for new assets it is therefore imperative that we gather and hand over the right data. Just as we wouldn’t accept a physical asset requiring extensive rework before operations, we must hold digital assets to the same standard.
As your Main Digital Contractor, we drive digitalization across your capital projects, delivering digital-native assets that advance ways of working, enable operational excellence and deliver lasting value.
See what’s possible: woodplc.com/digital
Digital Trust: Qualification and Assurance of Digital Twins and AI Technologies
Why the energy industry needs to verify digital twins and AI technologies to remain trustworthy over time
By: Ove Heitmann Hansen
The rapid evolution of AI and data-driven technologies is reshaping the energy industry in real time. With rising complexity, automation, and regulatory scrutiny, organizations are under increasing pressure to demonstrate that their decisions—often guided by intelligent systems—are not only efficient but trustworthy.
This isn’t just a technical challenge; it’s a leadership necessity. How do you know you’re getting a real return on digital investments? How do you assure shareholders, regulators, and the public that critical decisions supported by AI and digital twins are grounded in reliable, high-quality data? Digital twin technology has emerged as a keystone in this transformation—not just as a source of operational insight, but as a tangible expression of a company’s commitment to trust, performance, and accountability. Yet for all its promise, adoption remains uneven. Without trust and continuous verification, digital twins remain underleveraged—and in high-stakes sectors like energy, that trust must be earned.
A digital twin is more than a static model; it is a dynamic, evolving virtual representation of a physical asset or system. When done right, it becomes a tool for improving asset performance, enhancing decision confidence, reducing costs, and ultimately, enabling responsible autonomy. At the center of this evolution is the concept of digital trust: confidence in the data, the models, the algorithms, and the outcomes. DNV’s approach to digital twin assurance is designed to meet this demand head-on—combining decades of domain expertise and subject matter experts with structured methodologies that deliver confidence across the energy value chain. DNV has developed a structured qualification and assurance process, based on industry best practice, to help the energy industry verify and establish trust in digital twins and AI technologies, and to keep that trust valid over time.
From experiments to enterprise tools
Digital twins are no longer just digital replicas—they are becoming integrated, operational systems. Whether applied to pipelines, gas turbines, offshore assets, or energy storage platforms, today’s digital twins are embedded into digital operations at an ever-increasing scale.
In grid infrastructure, digital twins are being deployed to simulate distributed energy resource (DER) behavior in real time, enabling dynamic load balancing and predictive failure detection. These use cases depend not just on physics-based modeling, but on advanced data fusion — combining sensor telemetry, historical performance data, and probabilistic modeling techniques.
In pipeline networks, digital twins are vital to ensure the highest level of operational safety, as well as extending the life of an aging infrastructure by performing enhanced analytics and modeling against verified and integrated data. The vast pipeline footprint and advancing technologies have created an overwhelming burden on the workforce to effectively utilize all the information they have and continue to collect at an increasing rate. Digital twins help synthesize these large datasets more accurately and quickly, thus enabling pipeline operators to focus on making time sensitive and trusted decisions that directly affect safety, financials, and asset life.
Assurance as a prerequisite, not an option
If digital twins are to guide investment decisions, influence control logic, or shape maintenance schedules, they must be robust and defensible. This is where assurance becomes not optional — but essential.
DNV has pioneered methodologies for digital twin assurance that incorporate model validation, verification, and explainability. At the core is DNV-RP-A204: Assurance of Digital Twins, a Recommended Practice that provides a structured approach to evaluating:
• Fidelity: How accurately the twin reflects real-world behavior across operating conditions.
• Robustness: Its performance under edge cases or degraded data inputs.
• Data provenance: The traceability and trustworthiness of data inputs.
• Lifecycle alignment: How well the twin adapts to asset changes or environmental shifts.
These dimensions align closely with the foundational elements of digital trust—accuracy, accountability, and transparency in digital systems. For regulated sectors—hydrogen, nuclear, or carbon-intensive industrials—this kind of assurance is not just a best practice; it’s a prerequisite for regulatory acceptance and stakeholder confidence.
Where digital twins can add real value
Too many digital twin initiatives stall at the dashboard stage, where they do provide some insights into asset performance. But the real value is unlocked when the twin becomes part of the decision loop — driving smarter operations, reducing unnecessary maintenance, and delivering measurable gains in efficiency and cost effectiveness.
A predictive twin of a gas compression system, for instance, can ingest vibration, pressure, and thermal data to identify early signs of rotor misalignment. When paired with cost-impact modeling, it doesn’t just predict failure — it optimizes intervention, minimizing downtime while maximizing return on operational spend. For pipeline operations, the biggest hurdle is data availability and alignment. Digitizing historical paper records is an ongoing industry struggle, and verifying their accuracy once digitized is resource intensive.
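Returning to the gas compression example, the decision logic that pairs a predicted failure with cost-impact modelling can be sketched very simply; the probabilities, costs and threshold logic below are placeholders, not a production rule set.

```python
# Hedged sketch of pairing an anomaly signal with cost-impact logic.
def recommend_intervention(misalignment_prob, downtime_cost_per_day,
                           expected_outage_days, planned_intervention_cost):
    # Expected cost of doing nothing vs. cost of a planned intervention.
    expected_failure_cost = (misalignment_prob * downtime_cost_per_day
                             * expected_outage_days)
    if expected_failure_cost > planned_intervention_cost:
        return "schedule intervention at next planned window"
    return "continue monitoring"

# e.g. recommend_intervention(0.35, 250_000, 4, 120_000)
# -> "schedule intervention at next planned window"
```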
Large language models (LLMs) and AI are assisting in the extraction and automation of data alignment, and digital twins are being built that connect these once-siloed, aligned datasets into a central repository of contextualized asset information covering inspection, maintenance, operational, geospatial and other data.
Utilizing digital twin and AI technologies, pipeline projects can today automate the alignment of numerous, unconnected datasets. This creates an opportunity to perform advanced analysis of cathodic protection data that was not previously possible. Digital twins enable us to generate optimized corrosion prevention and mitigation strategies in a fraction of the time and effort, provide prescriptive decision support, and deliver actionable dashboards to our clients.
In grid operations, prescriptive twins are already informing real-time decisions on load dispatching, weather-triggered demand surges, and grid resilience under stress. These use cases deliver measurable ROI — but only when they feed directly into ERP systems, maintenance planning platforms, or automated control logic.
Pipeline operators are leveraging digital twins to model complex interactions between mechanical loading, environmental conditions, and material fatigue over time. By continuously assimilating live SCADA data and inspection records, they can extend inspection intervals, predict maintenance, reduce unplanned downtime, and quantify remaining useful life with greater precision. Embedding digital twins in critical decision-making pathways also elevates the stakes for digital trust. Stakeholders must have confidence not only in the underlying data and analytics, but also in the governance frameworks that support responsible system behavior.
AI, algorithms, and the new era of modelling
That assurance is even more essential as digital twins move beyond rule-based logic and begin incorporating adaptive, data-driven intelligence. AI will not replace engineering expertise, but it can extend it.
The most effective digital twins today use hybrid modeling — combining physics-based models with data-driven machine learning components. This preserves interpretability while enabling adaptive learning. DNV emphasizes this hybrid approach, especially in scenarios where first-principles modeling alone cannot account for nonlinear, emergent behavior — such as on offshore platforms, where environmental and operational uncertainties interact in complex ways.
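One common way to express such hybrid modelling is to let a data-driven component learn only the residual that the physics-based model misses, as in the sketch below; the physics function and choice of regressor are illustrative.

```python
# Minimal sketch of hybrid modelling: a physics-based prediction corrected by a
# data-driven residual term. The physics function and features are placeholders.
from sklearn.ensemble import GradientBoostingRegressor

def hybrid_fit(physics_model, X, y):
    residuals = y - physics_model(X)                 # what first principles miss
    corrector = GradientBoostingRegressor().fit(X, residuals)
    return lambda X_new: physics_model(X_new) + corrector.predict(X_new)

# The physics term keeps the result interpretable; the learned correction picks up
# nonlinear, emergent behaviour the first-principles model cannot represent.
```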
As AI capabilities grow, so too must the continuous assurance frameworks. Trust in the twin must extend to trust in the algorithm. Regulations such as the EU AI Act are beginning to formalize expectations around transparency, explainability, and accountability in AI-driven systems—many of which apply directly to digital twins operating in high-risk domains like energy. DNV’s assurance practices, including DNV-RP-0510 and DNV-RP-0671: Assurance of AI-enabled Systems, help organizations prepare for this regulatory shift by embedding trust into AI from the ground up.
This intersection of AI and assurance lies at the heart of the digital trust agenda: ensuring that algorithmic decision-making remains auditable, accountable, secure, aligned and valid over time.
Connecting the dots across the lifecycle
A digital twin is only as useful as its context. It must connect — across departments, systems, and time. That means building a digital thread that links all relevant data, decisions, and model states across the asset lifecycle. To support this, DNV champions interoperability and open collaboration through initiatives like the Digital Twin Consortium and the Open Simulation Platform. These help industry players avoid vendor lock-in and accelerate cross-platform deployment.
Security and data governance are equally vital. As digital twins gain more autonomy and access to control systems, cybersecurity risks must be assessed not as IT issues but as operational integrity risks. DNV’s assurance frameworks and proven methodologies increasingly integrate these dimensions to future-proof critical systems — strengthening the digital trust that underpins connected operations.
Digital trust will define the next generation of energy
The trajectory is clear. Digital twins are evolving from dashboards to agents — capable of coordinating with other twins, utilizing AI, adapting in real time, and shaping operational outcomes. But with growing influence comes growing scrutiny. Energy operators, vendors, regulators, and investors must be able to ask: Can I trust this model? What data is it using? What are its limits?
That’s why the next frontier in digital twins isn’t just technical. It’s ethical, procedural, and institutional. It’s about building systems that are not just smart — but accountable. It’s about embedding digital trust as a core design principle, not a retrofit.
Digital twins and AI technologies in the energy industry are no longer aspirational — they are operational. Yet their full potential hinges on trust: trust in the models, in the data, and in the insights they generate. By championing assurance frameworks such as DNV-RP-A204, hybrid modeling, and digital governance, DNV is helping energy leaders transition from digital trial to digital transformation. Our statement is, “If you cannot TRUST your digital representation, technology or data, you cannot use it in the real world.”
The future of energy won’t run on data alone. It will run on digital trust.
Industrial Digital Twin Association
We make Digital Twins
ASSET ADMINISTRATION SHELL (AAS) – THE STANDARD FOR THE DIGITAL TWIN
We shape the future of the Digital Twin together.
We implement industry requirements for the Digital Twin.
We connect industry know-how for a common solution.
We establish an international standard.
We demonstrate valuable use cases.
Reshaping the future of work in energy
Real-world applications and use cases have emerged beyond the buzz and hype of AI. Natural language processing (NLP), large language models (LLM), hybrid machine learning (ML), generative and agentic AI, all represent a litany of transformational technologies.
The possibilities for our digital future within the energy industry are endless:
• Faster, more reliable data processing and verification, with more sophisticated data-sharing infrastructures
• Connected data sources as a foundation for accelerated innovation and collaboration
• Increased autonomy and automated services in operations and maintenance
• Integrated physics-based and data-driven digital twin models that together with AI augment human decision-making
• Improved asset performance management and reliability for plants from upstream to downstream
• Bi-directional data flows between systems and equipment, with generative AI enhancing interaction to increase situational and operational awareness
• Supply chain transparency and traceability across the energy value chain
• Predictive analytics for improved energy efficiency
Considering the potential unlocked by an AI-driven digital transformation strategy, both immediate and long-term advantages become clear - ranging from faster data processing and improved information flow to reduced risk, enhanced communication, and greater automation. But the real challenge lies in moving beyond the hype to thoughtfully integrate these advanced technologies into daily operations in a way that aligns with industry needs. The key question is how AI will become embedded in routine business processes, supporting everything from workflow automation and data analysis to decision-making and customer engagement, ultimately reshaping how organizations operate and deliver value.
Moving beyond data and dashboards
What matters most is not just the data you have but how you use that data. Data standards are an important part of our digital future, enabling companies to collaborate and co-innovate through system interfaces that integrate in the back end. Many progressive companies have put in place a solid data infrastructure and added select applications like a digital twin on top, making data more contextualised and accessible through simplified dashboards that make data easy to find, filter and apply.
However, data and dashboards are not enough of a springboard for AI to have the measurable value or ROI that companies expect. Instead, we need to start with a value-focused approach that zooms in on the specific use cases and services where AI can have the most influence through a digital operating model that builds on digital twin technology backed by physics-based and data-driven models. The successful implementation of an AI-infused digital strategy needs to be driven by desired business outcomes.
Driving transformation with a value-focused approach
Effectively leveraging AI to move beyond “business as usual” in the energy sector requires deep familiarity with the industry’s evolving landscape. As a technology provider with both domain and technical expertise, we see the greatest potential for AI-driven value creation in several key areas:
• Safety
• Operations and Maintenance
• Performance Monitoring
• Supply Chain Management
• Design and Engineering
• Emissions Management
Once specific services within the potential high-impact areas are identified, typically those that are frequent and generate consistent, repeatable data patterns, they can serve as ideal candidates for AI-driven automation. These patterns enable AI and related technologies to support the development of value-focused applications that extract and process information, generate insights, and provide actionable recommendations. Many of these actions can be executed autonomously, ultimately contributing to greater energy efficiency.
Beyond the hype, it’s essential to keep people at the core of operations, supported by technology that can seamlessly handle diverse data types and sources. In this model, technology serves as an enabler—delivering the right amount of information to the right person at the right time. This enhances decision-making speed while reducing risk. Depending on the use case, the resulting benefits can scale significantly, from reduced emissions to earlier interventions in predictive maintenance scenarios.
A glimpse into the AI-driven future of energy operations
Consider a methane emissions management workflow. An emissions reduction team oversees 10 assets for a major exploration and production company, continuously monitoring each site’s carbon footprint. Their central tool: a cloud-based, dynamic digital twin that provides access to a configurable emissions management cockpit. This cockpit goes far beyond static dashboards: it highlights the highest energy consumers in real time, flags critical incidents requiring immediate attention, and suggests recommended actions based on live data streams enriched with historical and synthetic datasets.
At one facility, the cockpit detects that a main gas turbine is consuming significantly more energy than expected, pushing up the site’s overall emissions profile. The team initiates an investigation by querying the digital twin through an integrated AI-powered chat interface, retrieving targeted insights in seconds.
With the relevant data in hand, they virtually navigate to various system components, such as flare stacks and vents, to pinpoint the issue. Behind the scenes, complex data processing and integration are handled seamlessly, allowing the team to focus on decision-making. They consult a simulator view within the twin to compare real-time and modeled values, receiving prescriptive guidance on where and how to intervene. Armed with this clarity, the team can act quickly, supported by both actionable instructions and a transparent rationale.
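The comparison of measured against modelled values that triggers such an investigation can be as simple as the toy check below; the tolerance, units and message wording are illustrative.

```python
# Toy sketch of a real-time vs. modelled comparison behind a cockpit alert.
def flag_excess_consumption(measured_mw, simulated_mw, tolerance=0.10):
    deviation = (measured_mw - simulated_mw) / simulated_mw
    if deviation > tolerance:
        return f"Investigate: consumption {deviation:.0%} above modelled baseline"
    return "Within expected range"

# e.g. flag_excess_consumption(42.0, 35.0)
# -> "Investigate: consumption 20% above modelled baseline"
```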
Transforming the future of work in energy
Data becomes insight. Insight drives action. Action delivers outcomes. And outcomes evolve into prescriptive tasks, clear, traceable, and executable, enabling teams to move with confidence and transparency. This cycle supports a wide range of use cases, each aligned with business goals and value-driven outcomes. Forward-thinking energy companies are already embracing this shift. It’s not just better than business as usual, it’s the foundation for a smarter, more sustainable future.
AI surrogates and simulation breakthroughs are transforming oil and gas engineering
Integrating AI and accelerated computing is redefining simulation in oil and gas from complex subsurface modelling to real-time methane leak detection. Engineers are no longer constrained by hardware limits or slow workflows; cloud-native tools and GPU-powered platforms enable a new era of productivity and precision.
For decades, oil and gas engineers have been trapped in a cycle of compromise. Simulations were essential but slow, limited by CPU hardware, and often unable to model the physics of the real world with the fidelity required to make truly optimal decisions. That compromise is over. A new generation of cloud-native, AI-enhanced, GPU-powered platforms is eliminating the bottlenecks of traditional simulation and opening new frontiers in predictive maintenance, emissions control, and reservoir optimisation.
This shift is not about marginal gains. At SLB, Gocha Chochua, Technical Advisor, describes a project that reduced a 180-day simulation to 18 days, using NVIDIA A100 GPUs on a Rescale cloud platform, cutting costs and emissions while accelerating delivery. In another trial using the H100 chip, SLB saw performance gains of 13 times with a twentyfold drop in cost. “You can get it faster, and at the same time, it will be more efficient, more cost-efficient, and more time-efficient,” Chochua explains. “The more you buy, the more you save.”
The company now runs coupled multiphysics solvers that combine high-fidelity computational fluid dynamics with discrete element modelling, once considered computationally prohibitive. These are deployed at scale to design production systems such as intelligent flow control valves operating kilometres below the seabed. At this level, failure is not an option. A single malfunction can jeopardise an entire offshore field. “There were no test facilities in the world that would be able to test that well,” Chochua says. “We had to simulate everything. For one deployment in Brazil, we had to qualify equipment where the flow conditions made traditional physical testing impossible. We ran particle simulations where billions of sticky particles would accumulate, risking blockages that prevent the valve from operating.”
With traditional in-house compute, these simulations were infeasible. However, with hybrid workflows using CPUs for fluid dynamics and GPUs for particle transport, SLB could simulate, optimise, and deliver to the deadline. The result: SLB won the tender, providing a field capable of producing over one million barrels daily.
AI at the edge
SLB’s work is not confined to the wellhead. The company is building AI-enabled field tools running on lightweight systems that support methane leak detection at the edge. Methane is 80 times more potent than CO₂ over 20 years, and accounts for about 30 per cent of the global warming impact of greenhouse gas emissions. Identifying and responding to leaks is both a regulatory and operational priority.
Here, simulation is again essential. Using local sensors and emissions stations, SLB can collect wind direction, speed, and other variables to predict where a leak is and how much gas is escaping. Running high-resolution CFD simulations on-site would take an hour per scenario, even on coarse meshes, and field operations need multiple simulations to solve an inverse problem from limited inputs. The answer is to pre-train a reduced-order model that can make near-instant predictions.
“This model can be deployed on the edge, and it runs in a fraction of a second with enormous speed,” Chochua says. “You do not need a big machine. You can look at it in the field using a laptop or some simple hardware.”
Visualisation is a critical component of trust. SLB combines these reduced-order models with real-time visualisations that show the shape and path of methane plumes, often distorted by terrain and structures in ways that simple Gaussian approximations miss. A process that once required a supercomputer now runs as a Python script with coefficients, delivering accuracy, usability, and insight at the point of need.
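In spirit, that “Python script with coefficients” looks something like the sketch below: a basis and coefficient matrix computed offline from CFD runs, then a near-instant reconstruction in the field. The file names, features and shapes are hypothetical.

```python
# Illustrative reduced-order surrogate: CFD results are reduced offline to a small
# set of coefficients, and the field prediction is a cheap linear combination.
import numpy as np

# Offline products, assumed to be shipped with the edge tool (hypothetical files):
# basis: (n_modes, n_cells) mode shapes; coef: (n_modes, 4) regression coefficients.
basis = np.load("plume_basis.npy")
coef = np.load("plume_coefficients.npy")

def predict_concentration(wind_speed, wind_dir_deg, leak_rate):
    features = np.array([wind_speed,
                         np.sin(np.radians(wind_dir_deg)),
                         np.cos(np.radians(wind_dir_deg)),
                         leak_rate])
    amplitudes = coef @ features             # near-instant: a handful of multiplications
    return amplitudes @ basis                # reconstructed concentration field
```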
Accelerating the reservoir model
Another frontier of simulation is the digital subsurface. History matching, modifying reservoir models to align with known production data, is traditionally slow and computationally expensive. SLB has developed physics-informed neural networks (PINNs) that integrate with reservoir modelling software to achieve an order of magnitude improvement in simulation speed.
“We have a history of what is being produced, what is the water cut, what is the gas cut,” Chochua explains. “We modify some parameters of the formation to match this data. This approach achieved a ten times speed-up compared to the traditional reservoir modelling path.”
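Stripped of the PINN machinery, the underlying history-matching loop is an optimisation over formation parameters, roughly as sketched below; the surrogate, parameterisation and optimiser are placeholders rather than SLB's implementation.

```python
# Schematic history-matching loop: adjust formation parameters until simulated
# production matches the observed history.
import numpy as np
from scipy.optimize import minimize

def history_match(surrogate, theta0, observed_rates):
    def mismatch(theta):
        simulated = surrogate(theta)          # fast AI surrogate of the reservoir
        return np.mean((simulated - observed_rates) ** 2)
    result = minimize(mismatch, theta0, method="Nelder-Mead")
    return result.x                           # matched formation parameters
```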
The core principle is the same in each case: simulation still underpins critical engineering decisions, but AI is changing the scale and speed of what is possible. This is not about replacing physics; it is about enhancing it. These AI surrogates are grounded in first principles and can be verified through high-fidelity reruns. Trust, validation, and iteration are built into the workflow.
The enterprise AI stack
These workflows do not evolve in isolation. For Rescale Founder and CEO Joris Poort, the challenge is scaling these capabilities across the enterprise. The goal is to put AI-powered simulation into the hands of every engineer, every day. “You have to think in terms of layers,” he explains. “First is the compute layer, where you want to use the most advanced hardware, such as NVIDIA Blackwell. Then you have the data layer, unifying simulation and sensor data, and finally the AI layer, where you structure and tag data to build AI surrogates.”
The value of these AI layers is not only in R&D. Field staff who are not simulation experts can use AI surrogates to make operational decisions in real time, informed by the same scientific rigour that shapes upstream design. “Being able to actually change how engineers do their work every day, that is where the impact is,” Poort adds. “It is not just about speed. It is about access.”
Rescale’s newly launched CAE Hub, developed in collaboration with NVIDIA and tool vendors including ANSYS, Siemens, and Cadence, is central to this shift. It provides turnkey deployments of CUDA-X optimised tools for accelerated computing across all major cloud partners and infrastructure layers. This architecture enables repeatability, manageability, and flexibility at scale.
“There is a perception that these tools are difficult to integrate,” Poort says. “But the workflow is simple. Run the simulations you are already doing. Organise and structure the data. Train a model. Then use the AI surrogate to run inference in a fraction of a second.”
Simulation as a business advantage
This is not only an engineering advance; it is a business transformation. Chochua points to millions of dollars in competitive advantage that SLB gained from being the only company able to solve specific simulation challenges. These tools also unlocked new commercial offerings, such as SLB’s emissions monitoring solution, now an end-to-end field service enabled by cloud-native AI.
Rescale sees similar patterns across the industry, from supersonic jet development to hydrogen fuel cell design. In each case, AI models reduce simulation time from days to seconds. The benefit is not just speed; it is breadth. Engineers can explore more design options, iterate more freely, and optimise systems previously constrained by hardware and time.
AI surrogates are not speculative tools but operational systems deployed at scale, driving real business impact. “If you could do this for a supersonic jet,” Poort says, “you could probably do it for your use cases too.”
Fuel for the future
Energy remains the constraint that defines all digital innovation. Whether powering data centres or fuelling exploration, the energy sector must deliver reliable, sustainable, efficient outputs to keep pace with growing computational demand. In this context, simulation becomes a feedback loop: better energy drives better compute, and better compute unlocks better energy.
“Energy is the rate-limiting factor in everything we do,” Poort concludes. “And AI-driven simulation is central to solving that. We are excited to support engineers across the energy sector as they deliver these solutions.” The frontier is not theoretical. It is built today, modelled in seconds, grounded in physics, and proven in oilfields, on platforms, and in real-time operational decisions across the globe.
The health and wellbeing of the workforce offshore is crucial to the success and productivity of the rig, and operators should not only think carefully about the provision they want to supply but also about the skills and role that their medics should carry with them. As demand for oil and gas continues to rise globally, the ability of workers to carry out their tasks will be crucial as medical solutions become vital assets to not only protect workers but to boost productivity and continuity offshore.
FuelSync AI revolutionizes midstream supply chains with real-time insights, predictive analytics, and automation. It cuts costs, eliminates inefficiencies, and deploys in just 6-12 weeks—delivering results in minutes instead of weeks for maximum profitability and efficiency.
Logistics AI is the most advanced transport management system for fleets, using AI to optimize routes, cargo loading, and scheduling in real time. It cuts costs, reduces empty miles, and improves efficiency by over 30%, delivering faster, smarter results than any traditional solution.
V2T’s adaptable digital twin engine is a powerful AI-driven solution that creates real-time, dynamic digital replicas of complex operations across industries. From rail and ports to manufacturing, infrastructure, and borders, it provides unmatched visibility, predictive analytics, and automation to optimize efficiency, reduce costs, and prevent disruptions. Designed for adaptability, it seamlessly integrates with existing systems to transform decision-making and maximize operational performance across any sector.
Teaching machines to think like Shell engineers
Domain adaptation for large language models is becoming critical in oil and gas as companies look to unlock decades of technical knowledge. By tuning AI models to their unique data and language, engineers can move beyond search to contextual reasoning and decision-making.
Training a large language model to speak Shell’s language is no minor feat. The company’s internal knowledge spans everything from corrosion and catalysis to methanol production and materials science. None of it sits conveniently in generic datasets. Much of it is buried in ageing PDFs and inconsistent taxonomies. And all of it carries increasingly strategic value in a world driven by speed, scale, and simulation.
Shell’s challenge is not merely to retrieve this information but to reason with it. The goal is an LLM that can respond like an engineer, not just mimic one.
From search to reasoning
Shell’s legacy data is immense: hundreds of thousands of technical reports, many decades old, written in the specific language of Shell’s engineers, scientists and project teams. Yet, despite this volume, it remains frustratingly underutilised.
“Today, this asset is trapped,” explains Injy Sarhan, NLP Researcher at Shell, because conventional search methodologies simply do not fit. “So we asked ourselves, how can we turn this legacy of data from a burden into a strategic advantage?”
The answer is an ambitious project to build in-house large language model capabilities tuned specifically to Shell’s technical vocabulary, data, and reasoning needs. Rather than relying on generic models, Shell is domain-adapting open source models like LLaMA to reflect the multi-disciplinary nature of its work. These are not models for answering trivia questions; they are designed to emulate Shell’s thinking. Sarhan’s long-term vision is for these models to understand the context, jargon, and analytical nuance required to navigate Shell’s complex knowledge landscape. “We want these models to think as Shell experts, whether it is context, cross-domain insights, or technical information.”
Collaboration and compute
To move quickly, Shell teamed up with NVIDIA to combine energy domain expertise with deep AI tooling and infrastructure. Shell brought the data. NVIDIA brought the stack: GPUs, open-source frameworks like NeMo, and years of experience in industrial-scale LLM optimisation.
“The recommendation is to follow our ChipNeMo methodology,” explains Sergio Perez, Solutions Architect at NVIDIA. Based on NVIDIA’s internal work on domain-specific LLMs for silicon engineering, this approach starts with open-source models. It subjects them to domain-adaptive pretraining, followed by supervised fine-tuning and rigorous benchmarking.
Shell applied this methodology across three stages, starting with 1,000 chemistry-focused documents, expanding to include corrosion, and finally scaling up to 154,000 reports spanning 16 distinct technical domains. These documents were processed using NVIDIA’s NeMo Curator and other tools, converted into structured data, and filtered for quality and relevance.
Even here, the groundwork is complex. PDF reports had to be parsed, cleansed of duplication and noise, and clustered into thematic domains using hierarchical techniques. The result was a clean dataset of 7.1 billion tokens, including around 20,000 chemistry reports.
Instruction generation at scale
The domain-adapted pretraining set the base. To make the model capable of following complex engineering instructions, Shell then generated 2.2 million synthetic instruction pairs. These were built using a topic-based approach, with the model prompted to generate questions from specific themes or sections of a document.
Scaling this required significant compute. Over the course of two weeks, using 64 GPUs and a 40.5 billion parameter model, Shell processed 1,000 documents per day to build this instruction dataset.
Yet, scale is not enough. The team employed multiple layers of filtering to guarantee quality, including de-duplication using cosine similarity and, more unusually, using an LLM as a judge to assess instruction quality. Instructions were removed if scores dropped below specific thresholds, ultimately eliminating 14 per cent of the generated data.
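A simplified version of those two filtering passes might look like the sketch below; the embedding and judge functions are assumed to exist elsewhere, and the thresholds are illustrative rather than Shell's actual settings.

```python
# Sketch of two filtering passes: near-duplicate removal by cosine similarity,
# then an LLM-as-judge quality gate. `embed` and `judge_score` are assumed helpers.
import numpy as np

def filter_instructions(pairs, embed, judge_score,
                        sim_threshold=0.95, quality_threshold=6.0):
    kept, kept_vecs = [], []
    for pair in pairs:
        v = embed(pair["question"])
        v = v / np.linalg.norm(v)
        # Drop near-duplicates of anything already kept.
        if kept_vecs and max(float(v @ k) for k in kept_vecs) > sim_threshold:
            continue
        # Drop pairs the judge model scores below the quality bar.
        if judge_score(pair["question"], pair["answer"]) < quality_threshold:
            continue
        kept.append(pair)
        kept_vecs.append(v)
    return kept
```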
Measuring what matters
Shell developed a hybrid evaluation strategy to determine whether the models were learning anything useful. This strategy included public benchmarks like HellaSwag and MMLU to check for catastrophic forgetting and bespoke Shell benchmarks built from both manual and synthetic questions.
Sarhan explains that multiple evaluation strategies were used, including multiple-choice and open-ended questions, each assessed using log-probability calculations across individual tokens and complete sentences. “We cared more about the quality of the questions of the benchmarks rather than the quantity,” she says.
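For the multiple-choice case, log-probability scoring typically works as in the sketch below, using Hugging Face Transformers; the model name, prompt format and scoring details are illustrative, not Shell's evaluation harness.

```python
# Sketch of log-probability scoring for multiple-choice evaluation: the option
# whose tokens the model finds most probable is taken as the answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B"   # placeholder model id
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def score_option(question, option):
    prompt_ids = tok(question, return_tensors="pt").input_ids
    full_ids = tok(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Sum log-probabilities of the option tokens only (the continuation).
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prompt_ids.size(1) - 1:].sum().item()

def pick_answer(question, options):
    return max(options, key=lambda o: score_option(question, o))
```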
The results are significant. Shell’s adapted eight-billion-parameter models showed a 20 per cent performance improvement over base LLaMA models across Shell’s internal benchmarks, particularly in chemistry. This improvement carried over to related domains, even though the training focused only on chemistry. Visual inspection using Grad-EVAL confirmed that Shell’s models were more concise and contextually accurate in their answers.
Yet Sarhan is clear about the limitations. “We still did not exceed the 50 per cent accuracy, which is overall not so good. So again, we want to see, can we go beyond the 50 per cent accuracy?” Larger models, like the 70-billion parameter versions now in training, offer one potential path forward.
The recipe remains unfinished
Despite the gains, this is not a finished product. Sarhan is candid about Shell still “figuring out the recipe” for optimal domain adaptation and instruction tuning. Which data matters most? What volume of instructions is enough? How should complexity scale? What is the right blend between self-supervised and supervised approaches?
The team is now experimenting with more complex forms of instruction, cross-paragraph and cross-document question answering, retrievalaugmented fine-tuning (RAFT), and eventually model alignment through RLHF and direct preference optimisation.
The ultimate ambition is a Shell-specific model capable of accurate, multi-domain reasoning grounded in decades of knowledge and complex interdependencies. And the ambition is not just internal. Sarhan believes this work can advance LLM capabilities across the entire energy sector.
“If we do this right,” she says, “we will not only advance Shell’s AI capability, but we will also make sure that we are advancing AI in the energy sector overall.”
Redefining AI’s role in technical domains
Shell and NVIDIA have not built a consumer chatbot with an oil company skin. It is a serious industrial experiment in AI cognition that aims to turn scattered technical documentation into actionable insight at scale.
This is not simply about making AI more useful for Shell. It is about making Shell’s engineers more powerful by training machines to think like them. The questions still outnumber the answers. But as both Sarhan and Perez make clear, the payoff for getting it right could be transformational for Shell and the future of AI in every knowledge-heavy domain.
Digital reservoirs and intelligent rigs
Mark Venables, editor of Future Digital Twin, explains how AI’s growing grip on oil and gas operations is not just optimisation – it is reinvention
Oil and gas companies have long operated at the threshold between innovation and caution. High capital intensity, volatile markets and safety-critical environments demand precision. But they also demand vision. Artificial intelligence and, increasingly, generative AI are forcing the industry to shift from cautiously digitising existing processes to fundamentally reimagining how energy is found, extracted, refined and delivered. This is no longer about squeezing more from mature assets. It is about transforming the full life cycle of upstream, midstream and downstream operations.
The sector’s digital awakening has been slow by design. Unlike finance or retail, the cost of failure in oil and gas can be catastrophic. However, as AI models become more interpretable, infrastructure more scalable, and results more visible, that resistance is evaporating. In its place is a new conviction: that AI is the tool not just to optimise, but to out-think complexity at every stage of the value chain. As a reservoir optimisation lead at a supermajor recently told me: “AI enables us to make decisions based on full-field insight, not just historical rules of thumb. In some cases, what would have taken us weeks of simulation we can now explore in hours, with more variables and greater accuracy.”
Upstream, this influence begins with exploration. Generative AI models trained on seismic data, well logs and production history are being used to identify drilling prospects that traditional geoscience workflows might miss. These are not mere overlays or recommendation engines. They are hypothesis generators, capable of producing plausible geological interpretations, evaluating reservoir performance and updating models in real time as new data arrives.
At the drill site, AI is stepping in to analyse downhole sensor data, automate geosteering and detect anomalies before they become non-productive time. For offshore operators battling narrow drilling windows and costly logistics, this has tangible impact. As one drilling engineer put it: “Operators used to look at a string of numbers on a dashboard and hope they were interpreting it right. Now the system tells them what they need to know – or, more importantly, what they do not yet know.”
That same shift is happening downstream, where GenAI models are already being tested to manage asset integrity, forecast maintenance requirements, and write natural language reports for inspection teams. While predictive maintenance is not new, what is changing is the ease of access. Large language models can ingest and synthesise decades of engineering documents, failure modes, supplier data and field notes. They then return clear, contextualised recommendations in a format that any technician or supervisor can act on.
One operations manager at a Middle East refinery noted that integrating GenAI into inspection workflows halved the time to prepare shutdown scopes and increased the detection rate of known failure types. “We are not just reacting faster,” they said. “We are prioritising better. That has a direct impact on availability, throughput and safety.”
Crucially, these developments are being embedded not just in pilot projects, but across enterprise workflows. What was once a series of disconnected automation initiatives is becoming a cohesive intelligence layer. Energy companies are aligning AI development with their strategic goals – whether that is enhancing recovery, decarbonising operations, or reducing headcount risk in ageing workforces.
There are still challenges. Data integration remains a chronic weakness, particularly across siloed legacy systems. Trust in AI decisions must be earned, not assumed, especially where lives and livelihoods depend on the outcome. And the regulatory and cybersecurity implications of embedding generative models into control systems are only beginning to be understood. But the direction of travel is clear. The oil and gas industry is no longer dabbling with AI – it is designing with it. The next generation of tools will not just support engineers, but shape how engineering itself is done. Intelligent field development plans, real-time carbon accounting, autonomous logistics, synthetic well test scenarios, all are within reach, and in some cases already operational.
This is not simply about digital transformation. It is about cognitive augmentation at industrial scale. And for an industry that has always relied on engineering excellence to unlock difficult resources, that is not a threat. It is a mandate.