
The Risks of AI: A Northern Irish Perspective
AI is reshaping legal practice in Northern Ireland, offering efficiency but posing risks to confidentiality, liability and ethics. Solicitors must act now to ensure responsible, secure and informed adoption.
Artificial Intelligence (AI) is no longer a futuristic concept confined to science fiction or tech labs. It is now embedded in everyday life and increasingly present in the home and work lives of people in Northern Ireland. From automating document review to generating legal advice, AI tools are reshaping how solicitors work and how clients engage with legal services. This transformation brings both promise and peril, particularly for Northern Ireland’s legal community, which comprises legal practices of varying size, specialism and accessibility, and which must navigate a unique landscape.
The urgency of addressing AI’s legal risks to firms and solicitors in Northern Ireland stems from its rapid adoption by many of our clients. As we navigate AI adoption, our work in the best interests of our clients is two-fold: firstly, we must have a clear understanding of the risks of AI; and, secondly, we must use our professional skill and curiosity to explore how we can deliver a client service which is enriched by AI. While AI offers efficiency and innovation, it also introduces new liabilities, ethical dilemmas, and regulatory challenges.
'Solicitors must be proactive in understanding these risks to safeguard their clients, their firms, and the integrity of the justice system.'
Risks for lawyers using AI
Solicitors who integrate AI into their practice face a range of professional risks. One of the most pressing concerns is liability. If an AI tool produces incorrect or misleading legal advice, the solicitor remains responsible for the outcome. Courts and clients are unlikely to accept the argument that the error was the fault of the technology. This places a heavy burden on legal professionals, firstly, to understand how to use AI appropriately and, secondly, to maintain a human process to verify and validate AI-generated outputs.
Confidentiality is another critical issue and, for solicitors in Northern Ireland, there are two crucial aspects to this:
• Compliance with clients’ GDPR rights and solicitors’ GDPR obligations
• Confidentiality as a fundamental part, and the foundation, of privilege.
Many AI systems require access to large datasets, which may include sensitive client information. Without proper safeguards, there is a risk of breaching legal privilege or violating data protection laws such as the UK GDPR. Firms must ensure that any AI tools they use comply with strict confidentiality standards and do not inadvertently expose client data to third parties or insecure platforms. Solicitors should approach open-access or public AI tools with the greatest caution.
The question of authorship and accountability also arises when AI is used to generate legal advice. If a chatbot or automated system provides guidance that leads to a negative outcome, determining who is responsible becomes complex. Is it the developer, the firm, or the individual solicitor who deployed the tool? While this area remains largely unlitigated and the legal risks are unclear, the uncertainty and potential exposure to claims of negligence or misconduct are clear.
Risks for clients using AI
Clients are increasingly turning to AI-powered platforms to handle legal tasks, from drafting contracts to contesting fines. While these tools can offer convenience and cost savings, they are not substitutes for qualified legal advice. Misuse of AI can result in flawed decisions, unenforceable documents, or missed deadlines, all of which carry serious legal consequences. A particular risk, which will be apparent to any solicitor in any practice area who has used AI, is that AI-generated output very often contains references to laws from other jurisdictions which do not apply here. In a number of areas, because there is a large amount of data on a point in England and Wales and little Northern Ireland data on the same point, England and Wales-only legislation will “leak” into the response to a legal question, even where the user prompts that it is a Northern Ireland legal problem. These responses may sound compelling to the lay reader but be immediately and obviously wrong to a solicitor.
Relying on non-human advice also presents risks. AI lacks the contextual understanding and ethical judgement of a human solicitor. It may offer technically correct solutions that are practically harmful or fail to consider the broader implications of a legal strategy. Clients who depend solely on AI may find themselves in difficult situations without the support of a professional who can navigate the nuances of the law.
Solicitors in many, if not all, practice areas, will have to consider how AI can be used as a form of generative harm. Deepfake technology and AI-generated misinformation pose growing threats to legal proceedings. From forged evidence to impersonated communications, the potential for fraud is escalating. As AI outputs become more sophisticated, solicitors must be prepared to detect and counteract these threats, and clients must be educated about the dangers of using unverified AI tools.
Regulatory and compliance challenges
Northern Ireland faces a particularly complex regulatory environment when it comes to AI. The UK government has adopted a light-touch, pro-innovation approach to AI regulation, which leaves significant gaps in areas such as liability, transparency, and ethical standards. However, under the Windsor Framework, Northern Ireland remains aligned with certain aspects of EU law, including, to some extent, the EU AI Act. The EU AI Act introduces a tiered, risk-based framework that bans certain high-risk applications and imposes strict obligations on others. In practical terms, solicitors may prudently want to consider:
• Conducting an inventory of AI products being used by the business and its employees (and clients);
• Auditing the use of AI in their business and supply chain, mapping how data is used, where it is hosted and how it is handled; and
• Reviewing and updating contracts, documentation and policies.
The Law Society of Northern Ireland is currently working on guidance on AI use and will use this as an opportunity to engage with members about their AI-related needs. This will include providing Continuing Professional Development (CPD) training and a clear regulatory roadmap tailored to the needs expressed through membership and stakeholder feedback.
Access to justice and ethical implications
AI has the potential to improve access to justice by reducing costs and increasing availability. Automated platforms and chatbots can help individuals navigate legal issues without the need for expensive consultations. This could be particularly beneficial in areas with limited Legal Aid or in rural communities where solicitors are scarce. It could also prove a false economy: as noted above, the vast amount of data on certain issues outside the jurisdiction, compared to the volume within it, may result in flawed advice (which may nonetheless appear compelling to the lay reader).
However, the benefits of AI are not evenly distributed. Access to technology and digital literacy vary widely, and those who lack these resources may be left behind. There is a risk that AI could deepen existing inequalities rather than alleviate them.
Legal professionals must be mindful of these disparities and work to ensure that AI tools are inclusive and accessible.
Bias in AI systems is another serious concern. If an AI tool is trained on biased data, it may produce discriminatory outcomes. In legal contexts, this could affect decisions in areas such as employment, immigration, or criminal justice. Solicitors must ensure that the AI tools they use are audited for bias and comply with equality legislation.
Ultimately, solicitors have a duty to uphold justice and protect client interests. This includes scrutinising the ethical implications of AI and advocating for transparency, fairness, and accountability for everyone. The legal profession must play a central role in shaping how AI is used and regulated.
The risk of doing nothing: ghost adoption of AI
One of the most insidious risks is the informal or unregulated use of AI tools within legal practices. Firms may already be using AI without formal policies or oversight. Staff may rely on tools like ChatGPT for drafting or research, unaware of the potential risks. This “ghost adoption” can lead to inconsistent quality, ethical breaches, and legal exposure.
Without proper oversight, firms risk losing control over how AI is used. This can result in data leaks, flawed advice, or reputational damage. Internal audits, staff training, and governance frameworks are essential to ensure that AI is used responsibly and effectively.
Real-world examples have already highlighted the dangers. In one high-profile case (Frederick Ayinde v The London Borough of Haringey [2025] EWHC 1040 (Admin)), a solicitor and barrister submitted to the court case law which did not exist and may have been generated by an AI tool. Such incidents underscore the need for clear internal policies, professional standards, and a culture of accountability.
While some solicitors may not be using AI themselves, it is prudent to consider whether their clients are. A stark example was another, similarly high-profile case earlier this year in London (Al Haroun v Qatar National Bank QPSC & Anor [2025] EWHC 1588 (Comm)), in which a client provided AI-generated content to their legal team, which was then submitted to the court; the judge hearing the case did not recognise a quotation attributed to her or the case cited. Both matters were considered by the High Court which, exercising its Hamid jurisdiction, reiterated that the ‘administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.’ The subsequent judgment lists the potential professional ramifications for solicitors and barristers misusing AI. These are grouped into eight categories and make sobering reading.
The AI Guidance for Judicial Office Holders in England and Wales is a short and accessible guide, which will be useful to solicitors, pending Northern Ireland specific guidance. The Institute of Professional Legal Studies, Queen’s University, Belfast, now delivers lectures and tutorials which consider AI risk as part of the Professional Skills module.
A call for awareness and action
AI is transforming the legal landscape, and practitioners must respond with urgency and foresight. The risks are real and multifaceted, affecting lawyers, clients, and the justice system as a whole. From professional liability and data protection issues to ethical dilemmas and concerns about bias, the challenges are complex but not insurmountable.
By understanding the risks of AI, solicitors can properly and fully embrace this exciting innovation while safeguarding the profession’s core values of justice, integrity, and public trust.
The Law Society of Northern Ireland has a vital role to play and will be providing guidance, education, and advocacy around AI, to ensure that it is used responsibly, ethically and to the benefit of the profession and our clients. Member feedback is particularly welcome and will enrich the Society’s approach.
Jude Copeland leads the Legal Technology Group in Cleaver Fulton Rankin and serves in the Law Society of Northern Ireland as Chair of the Law Tech Sub Group with responsibility for AI, and as a member of the Future of the Profession Committee. Jude has delivered lectures on AI at Ulster University and the IPLS and has spoken about legal tech and AI at various conferences in the UK and Ireland.
To contact Jude, please email J.Copeland@cfrlaw.co.uk or connect on LinkedIn at https://www.linkedin.com/in/judecopeland/
Jude Copeland, Legal Review Manager & Associate Cleaver Fulton Rankin