

The AI race is on but tie your laces first
Trust me, I’m AI: Why to build guardrails today
Data overview
Set up for success
From panic to play, admin to innovation
Get in touch
The AI race is on but tie your laces first. This is not a sprint, it’s a marathon with potentially great rewards.
Adoption is happening at a steady pace.
Not even a third (31%) of organisations are using GenAI at the moment but adoption is expected to be widespread, with 72% of companies using it by 2027.
Only a fifth (22%) of senior leaders think GenAI is of high value, perhaps given the governance and training it requires.
Is this lack of trust in its value because 90% believe it will increase inaccuracy, 86% are concerned about data breaches, and 83% fear not meeting regulatory compliance?
Despite being very aware of the risks of AI, less than a third (31%) of organisations have a risk mitigation strategy in place.
Guardrails and guides build safety and capability.
Employees are using AI. Mistakes are happening, risking financial penalties, litigation, intellectual property issues and reputational damage. Waiting to see what others do and what regulation comes is not a viable strategy. Guardrails are needed now.
With bold leadership, employees innovate confidently, winning competitive edge.
A focus on using AI for administrative tasks, with the extra training and governance required, may cancel out any productivity gains. A governance and innovation gap is opening up between those who realise the potential of GenAI and those who see its impact as limited to mid- and low-value tasks. Competitive edge is at risk.
Take a fresh look at current policies with today’s challenges in mind.
Map collaborative processes to align teams.
Create a decision-making blueprint so AI meets organisational goals and ethics.
Record why decisions are made, ensuring accuracy for regulators, customers and stakeholders.
Develop an AI solutions matrix, providing an overarching understanding of AI use internally.
Get into the details of contracts for AI implementation and usage to spot red flags.
Continue to prioritise culture, as great people and their creative thinking are as valuable as ever.
Embed safe AI practices with training.
Inspire with leadership, by providing clarity on use cases and by upskilling with new solutions.
Treat every element of fast-moving GenAI – from compliance to accuracy to effectiveness – as a continuous, measured cycle.
Do you trust me? It’s an interesting question to consider as you read a report introduction I’ve written! If we’ve worked together, your trust in me is likely a combination of having experienced me delivering on promises, being transparent and doing the right thing – the consistent pillars of trust building. But, for some, there is a perception that GenAI is failing in all three of those areas – often inaccurate, shrouded in mystery and seen as an agent for bad as much as good. AI can feel out of control.
The EU AI Act has landed. More and specific regulation may be on the horizon for the UK, meaning compliance requirements will evolve, just as AI will and just as the world does. I can’t see a point at which we can fully ‘trust’ AI. But AI will transform our operations and interactions over the next decade and fast, so it’s no use waiting to see how the chips fall. We need to set our own guardrails to keep our organisations safe.
Leaders know they risk data breaches and falling foul of regulatory compliance if AI is not used properly. So, when our research showed that not even one-third of organisations have a risk mitigation strategy in place for using GenAI, I was shocked. Every single one should have incorporated AI strategies into their policies and procedures, whether they plan to use it or not (and let’s face it, most do). Employees are using AI today and the longer businesses go without clear guidance, the more mistakes will be made.
Moreover, AI has such huge potential for driving innovation and growth that missing out on it is a risk in itself in a highly competitive world. We developed this report to help organisations feel confident about their approach, putting those all-important guardrails in place, so they are set up to make the most of the possibilities AI unlocks.
Thank you to the senior leaders who took part in our research, either sharing their insights through roundtable discussions carried out in partnership with The Lawyer, or through responding to our in-depth questionnaire on the topic of the future of GenAI.
This report, The critical AI window: Moving from panic to play with confidence, details the findings, revealing where we are in the AI adoption journey. It then goes on to outline the practical steps to protecting and elevating organisations. I hope that after reading it, you trust that the future of GenAI is not overwhelming but an opportunity you can pursue confidently.
Paul Knight Partner, Mills & Reeve
Key points
93% of businesses will likely use AI over the coming years but 90% of senior leaders believe it will increase inaccuracy
86% of senior leaders are concerned about data breaches
83% fear not meeting regulatory compliance yet only 31% of organisations have an AI risk mitigation strategy
Nearly all (93%) of organisations are using GenAI, taking action to implement it or at least considering its adoption. Two-thirds (68%) of the senior leaders we spoke to are familiar with GenAI, so are they using it every day to create value within their organisations? Not yet. You might not think it when you hear the constant hype, but not even a third (31%) of organisations are currently using GenAI. 26% are actively developing a strategy for it, 30% are in the researching and sourcing stage and 13% either don’t know or have not yet considered it.
However, while far from everyone has fully embraced AI, we are well beyond the starting point of adoption. If we plot today’s situation against the standard innovation adoption curve, we can see a large leading pack of innovators, suggesting we are some way into the GenAI organisational adoption journey.
After a spike in innovators, it makes sense that there are fewer businesses in the early and late majority groups. Organisations are developing their strategies and researching their approaches, considering the best use cases in the wake of the innovators. More sit in the late majority group than the early majority, suggesting there is still some caution around using the technology. The level of laggards is consistent with where they would be expected to be.
With initial adoption well underway, is GenAI being used to its full potential? 29% of senior leaders say they already use GenAI in a significant way, 31% will do so within 12 months, and 12% within the next two years. That’s nearly three quarters (72%) of organisations using GenAI in a significant way, mostly within the next two years. Perhaps the remainder are waiting until regulations are set, or perhaps they do not yet see value being realised from AI.
Proportion of businesses using GenAI in a significant way
And yet, only a fifth (22%) of senior leaders see GenAI as potentially having high value for their organisation.
Nearly half (47%) believe GenAI will only bring them medium value. As many as a quarter (25%) admit to not knowing what value it could bring.
GenAI’s main benefits are seen as lying in low-value tasks. Improving efficiency and productivity and reducing administration are the top two areas where leaders believe GenAI can bring value, followed by cost reduction. Hardly the full gamut of the groundbreaking work it is capable of. Only 11% are thinking about how it can benefit their innovation capabilities.
Is aiming to wait and learn from others’ best practice a valid strategic approach? Our research suggests not.
Leaders who see AI as high value are 52% more likely to be in an organisation in the innovator group, forging ahead with adoption (47% vs 31% overall).
Interestingly, nearly half (46%) of large organisations, with an annual revenue of more than £500 million, are in this innovator group. It makes sense then that leaders in large organisations are also 41% more likely to see GenAI as high value. Not only that, but they are better prepared, as they are 58% more likely to have an AI risk strategy in place.
This all suggests that there are two groups forming.
In one, a handful of innovators, realising the potential of AI, are safely scaling adoption – investing time and energy in high value applications of AI and conscientiously setting up their compliance structures.
In the other, some large and many medium and small organisations who see AI’s impact as limited to mid- and low-value tasks are taking a “wait and see” approach to both adoption and governance, planning to act once there is better understanding of the opportunities and regulations.
However, employees are already sharing personal and sensitive information with GenAI, whether its use has been authorised or not, risking cyber attacks and punitive measures. At the same time, consumers and clients increasingly demand products and services faster and more cost-effectively.
90% of senior leaders in our research are concerned about the risks of AI results being inaccurate. This could go some way towards explaining why so many see GenAI as having limited value. 88% of leaders worry about the impact of bias and more than 8 in 10 are concerned that GenAI risks data breaches, cyber attacks, liability action, IP infringements and failure to meet regulatory compliance. Despite all these concerns, only 31% of organisations have a risk mitigation strategy in place for GenAI.
1. Increased risk of inaccurate results: 90%
2. Biased results: 88%
3. Increased risk of data breaches: 86%
4. Increased risk of cyber attacks: 85%
5. Increased liability risks: 84%
6. Regulatory compliance: 83%
7. Intellectual property infringements: 80%
8. Internal auditing and documentation requirements: 73%
9. Costs of implementing and maintaining: 73%
10. Internal privacy concerns amongst staff: 72%
The major GenAI concerns of the senior leaders involved in our research can be categorised into four themes...
All organisations can be exposed to inaccuracies through current GenAI options. Take Microsoft, for example, which was criticised for an AI-generated article listing a food bank as a tourist destination and encouraging people to visit “on an empty stomach”. The risk can be higher for trusted brands such as this, or where accuracy is vital (such as for law firms), with potential repercussions including reputational damage, service disruption and litigation.
Security risks caused by GenAI are significant and interlinked. Cyber attacks are becoming more sophisticated and automated. AI is enabling hackers to create more credible and convincing phishing and quishing (fake QR code) entry points, making it increasingly difficult for employees to distinguish between legitimate and malicious content. When employees could unwittingly share sensitive and confidential information through AI, the situation becomes a double-edged sword.
Even organisations that have governance processes in place face significant security risks from AI, warns Helen Tringham, Partner, Mills & Reeve: “Employees, driven by the excitement of leveraging AI for innovation, may unintentionally bypass established protocols designed to safeguard data.”
The explosion of GenAI has seen a raft of regulations introduced across the world, with more likely, and all of them changing as AI understanding develops. In the UK, the regulation of AI relies on existing legal frameworks such as intellectual property, data protection and contract law, highlighting the growing need for regulators and legal practitioners to adapt these frameworks to address the novel risks and complexities introduced by AI technologies.
All this means that there will be no steady state for regulation for some time. Yet the risk of not complying is significant, both reputationally and financially. Within the EU, under the EU AI Act, for example, violations can attract administrative fines of up to €35 million or 7% of total global turnover, whichever is greater.
Helen warns employees need clear guidelines to prevent this: “If they don’t fully understand the legal and ethical boundaries – whether around data protection, intellectual property, or equality law – the consequences can be profound. A single misjudgment could expose the organisation to group litigation, reputational damage, and costly legal disputes. In such a fast-evolving area, any organisation could inadvertently become the test case that defines how these issues are handled in law. That kind of scrutiny brings not only financial risk but also a lasting impact on public trust and brand integrity.”
Helen Tringham Partner, Mills & Reeve
GenAI is reshaping every element of employment so it’s no surprise this is a major concern. With some recruitment tools seen to show gender bias, potential discrimination is front of mind for leaders. But GenAI is also reshaping existing jobs and generating demand for new skills, causing concerns for many employees and risking culture and productivity.
As we’ve seen, “wait and see” is waiting to falter. However, getting in shape does not need to be onerous. In most organisations, futureproofed AI compliance requires an update to current systems rather than an overhaul.
A strategic, detailed approach will support in accurately informing regulators and auditors if required, transparently responding to questions from customers and shareholders which will arise, and make the best use of all the AI in play.
Review current policies with AI in mind
Data leaks and cyber attacks are a longstanding concern that pre-dates AI, so most organisations have excellent provision in place already. However, it is well worth reviewing existing frameworks governing data protection, privacy and security-related measures, with the new AI landscape in mind.
Map collaborative processes
Who will be involved in AI decision making and when? Collaboration between procurement and legal teams is the start but anybody with the authority to sign contracts needs to know when the alarm bells should be ringing and when to take advice from experts.
“Don’t get excited about the efficiency savings and ignore the small print,” warns Alison Ross-Eckford, Partner, Mills & Reeve. “Even if the contracts look standard they can include terms that you are unlikely to want to agree to, like signing away liability and intellectual property rights.
“One of the trip hazards is that the terms may include a unilateral right to change. That is a red flag. Look at any of the levers you might have as a customer, for instance only agreeing to short terms so you can switch suppliers if needed, or renegotiating if there is an option to terminate.”
A blueprint for the AI implementation process can guide teams during decisions and help ensure they meet organisational goals and ethics. Documenting the decision-making process when introducing new options can provide accurate answers when regulators, customers and stakeholders require oversight.
Key information includes:
Objectives
Ethical alignment
Governance required
Data considerations: sources of training data/where it will be processed
Likelihood of disruption events
Alternatives considered and potential backup options
Approach to bias mitigation
Contractual obligations, including length of contract
Given increased regulations and expectations, it’s becoming ever more important that businesses are transparent and have oversight of all their suppliers. A supplier map showing what AI is being used and what else it is capable of can also help teams to see what is already available and where there may be gaps.
“Before implementing new GenAI, start with your objectives and what you want to achieve,” says Sophie Burton-Jones, Partner, Mills & Reeve. “Do the due diligence to establish if the product is going to achieve that. And if so, look into whether it has the highest security standards, data is being processed appropriately and how bias is being mitigated. How does that compare to other players in the market? And does it meet your ethical and sustainability standards? Predict the questions your own customers are going to be asking around your use of AI and data, IP ownership and your security standards and protocols. Be ready with answers because you're going to be asked those questions again and again.”
Sophie Burton-Jones Partner, Mills & Reeve
Embed safe AI practices with training
Upskilling with specific AI training is sensible in most organisations, especially around significant risks such as data protection and cyber security. People need to truly understand how to handle data and exercise good judgement when processing it. At a time when demands on some employees and the focus on productivity are high, avoid the temptation to default to a seemingly quick fix, placing over-reliance on ChatGPT without fully understanding its capabilities.
For any organisation, clarity on its definition of AI and a shared understanding of its potential achievements is key. Guidelines provide people with confidence and reassurance in how to use AI for their work. People are looking for leadership to role model the innovative, yet security-conscious behaviours and high-quality outputs they want to see.
“There is a lot going on within employer organisations to find better, quicker, more efficient ways of doing things with AI,” says Melanie James. “While AI definitely has its place in the current and future workplace, the value and power of human intelligence should not be overlooked or underestimated.
“There’s understandable concern that many job roles will in future be replaced because of AI, but rather than roles disappearing, roles will evolve, and along with that, new and different opportunities will present themselves. Those employees who are future-focused and adaptable to change will likely fare better than those who are ‘AI hesitant’. Ultimately, people will always be needed, so take care to ensure that AI doesn’t risk or compromise your culture, but instead, enhances it,” adds Melanie.
Errors and leaks can be minimised and performance fine-tuned with ongoing data validation and testing, alongside reviews of end-to-end performance, always with human oversight.
Advisers, especially legal experts, need to know the latest international regulations and be able to bridge the understanding of underlying AI technology.
“What I tell you today may not still be the case in 12 months’ time,” says Paul Knight, Partner, Mills & Reeve. “All businesses need to stay informed as AI regulation evolves and adapt their practices accordingly.”
For organisations using AI in new and existing technology solutions, there is a further set of considerations.
New innovations
“If you’re targeting a specific industry, map out what their concerns are likely to be before even developing your solution,” says Sophie Burton-Jones. “The banking and healthcare sectors, for instance, are worried about security for different reasons, so you need to understand the particular regulations specific to the relevant sectors and how to ensure your commercial offering and contracts can help them comply with those.”
“Keep in mind the data protection principles set out in the GDPR,” recommends Paul Knight. “This means implementing appropriate security measures within products, and making sure technology only collects and uses necessary personal data then processes it lawfully, fairly and in a transparent manner.
“With public-facing documents, including privacy notices, tell customers what information relating to them you will use, store or process and why, which lawful basis under GDPR you are relying on, who that information will be shared with, where, and for how long. Track what data or other content has been used to train the AI system, and make sure all necessary permissions have been obtained.”
A robust IP and trademarking strategy prevents competitors building on an organisation’s material with existing intellectual property rights and enables the organisation to protect the value of its own innovation.
Consider how you are going to manage the introduction of AI to customers when making changes to your products and services, says Sophie Burton-Jones: “If bringing AI into existing software products or providing AI as an add-on module, consider whether you need to change your existing terms and conditions. Prepare contracts in plain English terms and provide FAQs to help allay customer concerns. Make sure you build in flexibility to be able to adapt over time when new regulatory requirements come in, as is usual for other types of technology as well.”
“It’s often said that we fear what we don’t understand. If your organisational culture already embraces learning and openness, now is the time to embed AI education into your ongoing development strategy.
Start with the fundamentals: What is AI?
How is GenAI different from earlier machine learning or rules-based systems? How do those differences shape both opportunity and risk within your teams? What tasks can (and should) be delegated to AI, and why?
And, crucially, despite the hype, is GenAI the right flavour of AI to help deliver your business objectives or wider goals?”
Across sectors, our clients are harnessing AI to solve real-world challenges, streamline services and deliver greater value. Here are just a few examples:
AI technology is being used by the British Co proficiency tests – results are generated quickly with AI scoring.
Major UK property consultants are using AI technologies to enhance their services, which include urban design and planning.
A UK government department is implementing AI as part of its support package for citizens wanting to use services it offers.
I work with some technology firms where GenAI is business as usual. As well as developing the actual technology that sits behind everything we are discussing in this report, they are also using it to manage calendars, greet visitors, work out who to hire and understand customers – changing how they operate and how they innovate.
But AI right now feels like stepping into uncharted territory – while it’s thrilling for a handful, it’s stressful for most. Every one of us realises that this is a monumental step change, and it can feel overwhelming with all the noise. But rest assured, you are not alone. Despite the seeming rush, we are still in the very early days. In a typical organisation, it will be anywhere between two and ten years before AI is everyday.
Businesses are in the critical window of time where they can set themselves up for success. My advice is: don’t panic, don’t rush but don’t wait either. The biggest risk is the financial impact of being left behind by competitors. This is the time to get it right. GenAI is a tool, it’s inevitable and it’s positive for business. Now is the moment to get your risk and ethical decision frameworks in place, setting the stage to use GenAI for true innovation.
Those that do so will see great gains. The true power of GenAI extends far beyond handling administrative tasks, which is where a lot of organisations are currently focusing. For every element of extra efficiency gained there, a similar amount of effort may be spent on governance, training and dealing with unlawful applications.
GenAI’s real value lies in its potential to supercharge growth and spark innovation. The challenge is to look past immediate anxieties and small opportunities to AI’s full potential. Using GenAI to engage with customers, empower employees to think creatively and strategically and speed up product development – those are the opportunities worth your time and thinking.
My advice at the start of this new era is to identify what value AI can bring to your organisation, how you would ideally use it and the risk factors. Do your due diligence on bias and standards, making sure that they meet your own. Human oversight, transparency and corporate responsibility are key. Risk assess where you are today and plan ahead, developing robust internal structures to make sure you're following today’s best practice. Keep a record of why decisions were made.
Of course, I have to talk about regulations. I’m seeing many international organisations grappling with multiple sets of legal frameworks and choosing to adhere to the highest standards. To make strategic decisions like this, it is absolutely vital to have advisers who passionately keep track of the changing landscape of AI across the world, and the intricacies of how regional and country legislation play into each other.
It's difficult to know how to control it if you don't know how you want to use it. Within a safe, offline environment, enable play. Develop and implement technology safely, ensuring it won’t result in any unexpected outcomes. Get comfortable as teams and an organisation. See the benefits. And remember, there is a middle ground, a measured approach. You don’t have to go all-in and do something that feels unnatural or too risky for your organisation. AI can be introduced in stages and as you become more comfortable.
In short, set guardrails, do your due diligence, encourage play, horizon scan and keep a close eye on regulation and best practice. While the next few years will be transformational, they don’t have to be turbulent.
Doug McDonald Partner & Head of Technology, Mills & Reeve
sophie.burton-jones@mills-reeve.com
Mills & Reeve’s The critical AI window: Moving from panic to play with confidence research is based on the views of senior executives, particularly general counsel and in-house lawyers, within public and private organisations across the UK.
To shape the research, roundtable events were carried out with senior legal leaders in collaboration with The Lawyer. This qualitative work was followed by a quantitative in-depth survey amongst 321 respondents in 2025.
At Mills & Reeve, our 1,450 plus people and over 850 lawyers share one vision – achieve more together. It’s a state of mind in every client relationship we start and every choice we make. And it’s what clients consistently say distinguishes us from your average law firm.
You can expect a close and attentive working relationship with a team that’s responsive when you need them. You’ll receive advice tailored to your individual needs. Wherever in the world you or your business needs support, we’ll draw on our network to give recommendations.
We’re driven by our values – we’re ambitious, we’re open, we care and we collaborate. We embrace new ideas, communicate honestly and are easy to work with. We’re committed to you, our planet, our communities and each other. For further information please visit the website at www.mills-reeve.com.