ARTIFICIAL INTELLIGENCE (AI) IN BUSINESS: THE EXPERTS’ VIEW

Sam Rourke, Strategic Growth Director, Paramount Digital

How can businesses keep up with the speed of AI innovation?

In digital marketing, the question about AI is less whether it will replace jobs and more how the existing workforce will adapt.

AI could slow the pace of job growth because it removes inefficiencies, especially around reporting and manual tasks.

But, personally, I believe it’s unlikely AI will replace marketing jobs like-for-like, provided the average digital marketer can embrace change and understand how to use AI in their role.

Content writing is a prime example.

The early hype around ChatGPT was that it could be used for writing blogs and other content - so was it going to take writers’ jobs?

But when you look at it, the basic content ChatGPT produces is generic, often inaccurate, and usually filled with clichés and phrases people don’t use. And that’s before you get to the copyright issues. So, AI won’t replace writers. But, if you learn how to use it, it’s a good tool for making the writing process easier. And it’s the same in most digital marketing roles.

In paid ads, it’s not about manual bid management anymore. AI takes on a lot of that. It’s more about sales and impact consultancy, tracking how AI performs and keeping algorithms in check.

I think AI is going to change jobs in digital marketing, but it’s not going to take them.

The best thing you can do is test the AI. It’s not perfect, so it’s about finding out what it can do, where you can push it, and where it’s most limited.

I’d also definitely recommend staying close to the industry leaders, like Google, Amazon, Microsoft and OpenAI (the maker of ChatGPT).

There’s a battle going on to be the leader in AI. It’s so new that no-one really knows where it’s all heading, so stay on top of the conversation. The worst thing you can do is blindly trust the output of an AI. You have to sense-check the information it gives you. We’ve all heard that AI can “hallucinate”, so you can’t take it for granted that its outputs are correct.

Lisa Johnson, Founder & Managing Director, SquareOne Training

Should AI literacy be a core competency for all employees?

Our short answer: yes.

Even if your company isn’t fully leveraging AI yet, there’s a good chance some of your team already are - whether for productivity, automation, or creative tasks. And with that come benefits - but, more importantly, risks.

That’s exactly why AI literacy matters. In many businesses, staff who are naturally curious or tech-savvy will start experimenting with AI tools on their own - but without guidance, this can lead to inconsistent processes, inefficiencies, or even unfair advantages. It’s up to leadership to ensure AI is used correctly, fairly and efficiently across the business.

Whilst AI can be a divisive topic, its potential to transform businesses is undeniable - and change seems inevitable. From streamlining processes and analysing data to enhancing customer experiences and sparking new ideas, the opportunities are vast.

Our recommendation?

Embrace AI now and get your business ahead of the curve. While the world of AI can seem overwhelming to those unfamiliar with it, top-down training can empower your team with the skills and confidence to use it effectively.

Start by choosing your preferred platform - whether that’s ChatGPT, Microsoft Copilot, or something else - then build structure around it. Clear policies and tailored training will not only create consistency across your team but also help embed smart habits and techniques that make everyday tasks more efficient.

Investing in AI literacy early reduces future stress and sets your team up for long-term success. The digital world is constantly evolving, and while change can be daunting, the businesses that invest in learning and continuous development are the ones that stay ahead.

At SquareOne, we’re already supporting organisations through this shift - and we believe our region is well placed to lead the way when it comes to building AI literacy in the workplace.

How do you ensure ethical use of AI in decision-making processes?

AI systems must be trained on unbiased data; otherwise they can perpetuate or amplify existing societal biases. Use diverse and representative datasets to train AI systems, actively identifying and mitigating biases present in historical data. In my opinion, the user and the developer of an AI system must always be accountable for the decision-making processes they build and use.

AI systems should treat all individuals equitably, avoid perpetuating biases, and promote social justice, which requires meticulous examination of training data and algorithmic design. It is also very important for developers to regularly audit AI systems to ensure they operate as intended, adhere to ethical standards, and comply with evolving regulations.

Who is accountable when AI systems make mistakes or cause harm?

Accountability for AI mistakes is complex. In my opinion, it should fall on the developer, deployer or user of the AI system, depending on the specific cause of the mistake and the regulatory context.

Should businesses be required to disclose when AI is used in customer interactions?

In my opinion, in most cases, yes: businesses should be required to disclose when AI is used, but only in customer interactions. This, in turn, will promote transparency, build trust, and allow customers to make informed decisions about their interactions, especially when dealing with sensitive issues where human empathy is paramount. While disclosure is not always legally mandated at present, regulations are evolving (e.g., the EU AI Act).

Regulating the Revolution: Civil Justice Council tackles AI in court documents

In Well Connected’s last Autumn Edition, Taylor Wessing considered the interaction between AI and legal privilege, concluding that the courts have been clear: extending legal advice privilege to advice given by non-lawyers or a computer program is Parliament’s remit.

In addition to fundamental questions of privilege and confidentiality, the use of AI in preparing court documents raises various issues, particularly in respect of accuracy and hallucination, threatening the integrity of legal proceedings. Following recent incidents where lawyers have narrowly escaped contempt of court over AI-generated fake case citations, the Civil Justice Council (“CJC”) has established a working group to examine the use of AI when preparing court documents.

Chaired by Lord Justice Birss, the working group will produce a consultation paper followed by a final report, seeking to address whether “rules are needed to govern the use of AI by legal representatives for the preparation of court documents, including pleadings, witness statements, and expert reports”.

Cautionary Tales from the Courtroom

The cases of Hamad Al-Haroun v Qatar National Bank and Frederick Ayinde v The London Borough of Haringey serve as stark warnings. In the former, solicitor Abid Hussain admitted relying on unverified legal research, which the court described as a “lamentable failure to comply with the basic requirement to check the accuracy of material put before the court”. The latter involved a pupil barrister who avoided contempt proceedings partly due to her junior status, though both practitioners were referred to their respective regulators.

LJ Birss emphasised that lawyers must take “personal responsibility for what goes in your name”. He illustrated this with AI document summarisation, noting that whilst AI can assist, lawyers must still read the original documents themselves.

The Path Forward

These developments highlight the need for regulatory clarity where technology has outpaced existing frameworks.

The CJC working group represents a crucial first step, with LJ Birss commenting “I suspect we’ll need some adjustment [to court rules]”.

When used appropriately, AI can enhance efficiency in document review, legal research, and routine drafting. However, this technological advancement must be balanced with robust safeguards. The integrity of the legal profession depends on maintaining professional accountability in all AI-assisted work.

Should you require advice on the issues raised, Tom Charnley and Megan Howarth, members of Taylor Wessing’s specialist disputes and investigations team working within the Liverpool office, would be happy to discuss this with you.

t.charnley@taylorwessing.com m.howarth@taylorwessing.com
