
When the numbers talk back

Dr. Faisal Sheikh and Mark Ingram discuss how AI will impact the future of accounting. So will the machines take over?

The accounting profession is rules-based: built on double-entry systems, compliance checklists, audit trails and IFRS frameworks. It is firmly embedded in the laws of all modern states.

But AI, particularly in its most recent forms, is not about ‘playing by the rules’. It learns. It adapts. Sometimes it appears to hallucinate. But most of all it uncovers patterns faster than most of us can imagine.

So what if a spreadsheet can think and talk back? What if it can offer forecasts, interpret legal codes, learn our behavioural habits and, perhaps sooner than we think, refuse our inputs and challenge our conclusions?

We are at an accounting turning point, but not for the first time.

The introduction of the spreadsheet and computerised accounting systems in the 1980s changed accounting. Back then, it was data entry clerks who felt the heat. Today, it could be middle-tier auditors, analysts, even financial controllers. Similarly, in the 1980s and 1990s, the replacement of local bank managers, and their local human knowledge, with centralised lending algorithms arguably contributed to the proliferation of US subprime mortgages and the subsequent global financial crisis.

Yet this change will be big. In 2022 the World Economic Forum estimated that, by 2025, over 50% of all accounting-related job tasks would be performed by AI or algorithms.

Who is using AI?

From KPMG’s Clara platform to Deloitte’s use of machine learning in fraud detection, AI is already woven into the accounting mainstream. EY’s Helix platform processes massive volumes of structured and unstructured data during audits. PwC has deployed natural language processing (NLP) tools to comb through contracts and financial statements in real time.

Even outside the Big 4, startups such as MindBridge Ai are offering risk-scoring engines that help accountants flag anomalies before the human brain would even notice them. And, perhaps closer to home, many are familiar with Xero, QuickBooks and Avalara offering real-time reconciliations, automated expense classification and multi-jurisdictional compliance at a fraction of the traditional cost.
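To make the idea concrete, here is a deliberately simple, hypothetical sketch of the statistical flavour of such risk scoring: flag any transaction whose amount sits unusually far from the mean. Real engines combine far more signals, and nothing here reflects any vendor’s actual method.

```python
# Toy anomaly scoring: flag transactions more than `threshold` standard
# deviations from the mean amount. Purely illustrative.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

ledger = [120.0, 95.5, 110.0, 101.3, 98.7, 9_800.0, 105.2]
print(flag_anomalies(ledger))  # [5]: the 9,800 entry stands out
```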

This isn’t automation in the narrow sense of eliminating repetitive tasks. It is ‘cognitive outsourcing’ – a shift in who (or what) makes decisions. That doesn’t mean such tasks go unchecked; rather, the accountant’s role shifts from performing tasks to managing them, checking their reasonableness and interpreting their results. This is essential because, as the spreadsheet taught us: “To err is human, but to really stuff up requires a computer.”
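In practice, that managerial role can be as simple as deciding when a machine’s answer stands and when a human must look again. Below is a minimal, hypothetical sketch of such a ‘human in the loop’ check; the function, labels and the 0.9 confidence threshold are all invented for illustration.

```python
# Hypothetical sketch: accept an automated classification only when its
# confidence clears a threshold; otherwise queue it for an accountant.
def route_classification(label: str, confidence: float,
                         review_queue: list, threshold: float = 0.9) -> str:
    if confidence >= threshold:
        return label                          # machine decision stands
    review_queue.append((label, confidence))  # accountant makes the call
    return "PENDING_HUMAN_REVIEW"

queue = []
print(route_classification("Travel", 0.97, queue))         # Travel
print(route_classification("Entertainment", 0.55, queue))  # PENDING_HUMAN_REVIEW
```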

What’s next?

What happens when AI systems make predictions or decisions we cannot easily explain or check, yet we trust them because they outperform us? We call this the displacement of interpretive authority.

This is where the theory of Technological Determinism (McLuhan, 1964) becomes relevant. If left unchecked, tools begin to shape social structures and human behaviour more than humans shape tools. AI doesn’t just change the how of accounting – it risks redefining the why and who.

In 2023 an AI tool at a mid-sized UK firm flagged multiple ‘non-compliant’ transactions by a high-performing manager. The tool’s algorithm weighted certain wordings in email communications as ‘suspicious’, even though they were colloquial and contextually innocent. The manager was placed on suspension before a human had even reviewed the claims. This echoes the recent Post Office scandal. It is not just a technical failure; it is a philosophical one. Should AI make key decisions unchecked?
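To see how easily such a misfire happens, consider a deliberately naive, hypothetical keyword-weighted scorer; the phrases and weights below are invented, but the context-blindness is exactly the failure mode described above.

```python
# Naive 'suspicion' scoring by weighted keyword matching. Invented
# weights; a real tool would be more elaborate but can fail the same way.
SUSPICIOUS = {"bury": 2, "hide": 2, "off the books": 3}

def suspicion_score(text: str) -> int:
    text = text.lower()
    return sum(w for phrase, w in SUSPICIOUS.items() if phrase in text)

# A colloquial, entirely innocent email still scores highly:
email = "Let's bury the hatchet and hide nothing from the client."
print(suspicion_score(email))  # 4 - flagged, with no grasp of context
```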

ACCA and the ethical mandate

The ethical principle of objectivity means exercising professional or business judgement without being compromised by:

• Bias

• Conflict of interest, or

• Undue influence of, or undue reliance on, individuals, organisations, technology or other factors.

In their 2024 report ‘Ethics for a Digital Age’, ACCA reaffirm that professional accountants must go beyond technical competence; they must develop what ACCA terms ‘ethical tech fluency’: the ability to interrogate, challenge and contextualise digital outputs.

Among their recommendations:

• Redesign CPD: Continuous professional development must now include data ethics, AI bias and algorithmic transparency, not simply IFRS updates.

• AI Governance Codes: ACCA members should be equipped to participate in AI oversight frameworks, both within firms and across sectors.

• Digital scepticism: A mindset that balances openness to innovation with cautious interrogation (desperately lacking during the UK Post Office scandal!).

These ideas resonate with Giddens’ Structuration Theory, which holds that human agency and structure must co-exist in a feedback loop. Accountants, as agents, must shape the digital structures they operate within, not simply comply with them.

Preparing for the post-human accountant?

For employers, the question is not whether to adopt AI, but how to integrate it without eroding trust, judgement or wellbeing. A 2023 internal PwC survey found that 48% of junior accountants were already experiencing ‘automation anxiety’, with many unsure whether to upskill or exit the profession altogether. Here are some concrete strategies firms can consider:

1. Create AI-accountability committees: Just as audit committees provide checks and balances, dedicated AI-accountability committees should review how algorithms are used in financial decision-making. These should include professionals from multiple disciplines, including finance, ethics, law and technology.

2. Invest in ‘human-AI collaboration’ training: AI should be treated as a cognitive companion that requires oversight. Training in this skill might include scenario-based workshops where AI outputs are contested, interpreted or rejected. ‘Rubbish in, rubbish out’ applies to AI too!

3. Rethink recruitment: The accounting profession has traditionally prized attention to detail, client confidentiality and rule-following. But in an AI-rich future, judgement, curiosity, ethical awareness and resilience may matter more. Firms must look for ‘AI-resilient accountants’: those who can challenge outputs and temper AI’s ethically agnostic and potentially flawed conclusions with reasonableness tests and human ethical values.

4. Mental health matters: If decisions about job security are being shaped by AI predictions, the psychological safety of employees becomes central. Transparent policies, appeal mechanisms and human-led discussions must be enshrined in company culture.

Case study: the Danish data dilemma

In 2024, a Danish logistics firm implemented a new AI-based financial forecasting model. Within six months it had cut reporting times by 60% and improved forecast accuracy. But an internal review revealed that no one in the finance department could fully explain how the model derived its assumptions.

When auditors flagged a potential breach in compliance (related to revenue recognition), the finance director responded: “We trust the model more than we trust our own assumptions now.”

This led to a regulatory inquiry: not because the model was wrong, but because its opacity violated principles of transparency and explainability, tenets central to the ethical practice of accounting, as shown in the ACCA ethical code mentioned above.

(By the way, Mark is uncomfortable with how Xero calculates FX gains and losses… he should dig deeper!)
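For the curious, the textbook realised FX gain or loss on a receivable is simply the foreign amount revalued at the settlement rate versus the booking rate. The sketch below shows that calculation with hypothetical figures and makes no claim about how Xero actually implements it.

```python
# Textbook realised FX gain/loss on a foreign currency receivable:
# amount x (settlement rate - booking rate). Sign flips for a payable.
from decimal import Decimal

def realised_fx_gain(amount_foreign: Decimal,
                     rate_at_invoice: Decimal,
                     rate_at_settlement: Decimal) -> Decimal:
    return amount_foreign * (rate_at_settlement - rate_at_invoice)

# EUR 10,000 receivable booked at 0.85 GBP/EUR, settled at 0.88 GBP/EUR:
print(realised_fx_gain(Decimal("10000"), Decimal("0.85"), Decimal("0.88")))
# -> 300.00 (a GBP 300 gain)
```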

What if AI out-thinks us?

We are inching toward a moment when artificial general intelligence (AGI) may outperform human accountants not just in speed, but in judgement. If this threshold is crossed, what remains for the role of the accountant?

‘Humanity’ can only be displayed by humans. AI may write a flawless IFRS 15 revenue memo in seconds, but it cannot yet sit with a grieving client or co-worker, or navigate ethical grey zones with the sensitivity born of lived experience. (Indeed, not all humans appear able to demonstrate such skills!)

Philosopher Michael Polanyi reminds us in The Tacit Dimension (1966): “We can know more than we can tell.” The future of accounting may depend on precisely that kind of knowledge – knowledge that is tacit, embodied and, above all, human.

For employers, this is the time to resist the ‘race to the bottom’ (and potential disaster) and instead invest in ethical, human-centred AI integration. For ACCA and other professional bodies, this is a moment to lead with wisdom.

For accountants, your role is pivoting but not disappearing. No machine can yet reliably replicate your judgement, curiosity, humanity and ethical spine.

In conclusion: a lesson from 1494?

AI is an amoral tool, neither a saviour nor a saboteur. But as with any powerful tool, its impact depends on how it is wielded, and by whom. In his Summa de Arithmetica (1494), Luca Pacioli suggests: “A person should not go to sleep at night until the debits equal the credits.”

By insisting that the books be balanced before nightfall, Pacioli is suggesting a broader moral vision: that integrity in financial practice is inseparable from integrity of character. To deceive in numbers is to fracture something inward. Accounting must reconcile both your books and your conscience.

Pacioli was writing about ethical alignment, a kind of ‘spiritual bookkeeping’ in which honesty, diligence and balance were markers of a just life. Such a worldview is fundamentally rooted in a human moral struggle – something no algorithm, however advanced, can experience or replicate.

• Dr. Faisal Sheikh is Principal Lecturer in Accounting at Nottingham Business School. Mark Ingram is a lecturer at FME Learn Online.
