A.I. ethics a growing concern

Editor’s note: The following article was originally published on Sept. 22, 2022 by Accounting Today. It is reprinted with permission.

BY CHRIS GAETANO


The increased use of artificial intelligence in accounting software has brought with it growing concerns over the ethical challenges this technology creates for professionals, their clients and the public as a whole.

The past few years have seen a growing number of accounting solutions touting their use of AI for a wide range of applications, from tax planning and audits to payroll and expenses to ERP and CAS. The accounting profession spent $1.5 billion on such software in 2021 and is projected to spend $53 billion by 2030, according to a report from Acumen Research and Consulting.

Despite this rapid growth, too little attention has been paid to the ethical considerations that come with it, according to Aaron Harris, chief technology officer of Sage, especially given the potential money to be made.

“I have seen, in a lot of cases, that the temptation of commercial success is a louder voice than any ethical concerns,” he said.

But what exactly are those ethical concerns?

Harris said the current issues have less to do with the accidental creation of a robot overlord and more to do with the insertion of all-too-human biases into the code. He raised the example of a tool many businesses, including accounting firms, now use routinely: automated resume screening. These programs, he said, are trained on existing data to guide their decisions, and much of that data reflects human biases. If an AI is trained on biased data, it will act in a biased way, reinforcing structural inequalities in the business world.

“If you’ve created an AI that parses an applicant’s resume, and makes a decision based on whether or not to proceed to an interview, if the data that you feed into that AI for training purposes disproportionately represents one ethnicity or another, or one gender … if African-American resumes, if women’s resumes, are underrepresented, the AI naturally, because of the data fed into it, will favor white males because it’s quite likely that was the bulk of the resumes that were in the training data,” he said.

Enrico Palmerino, CEO of Botkeeper, raised a similar point, saying there have already been issues with loan-approval bots used by banks. Much like the resume bots, the loan bots use bank data to identify who is and is not a default risk and use that assessment to determine whether someone gets a loan. The bots identified minorities as a default risk, when the real correlation was with bad credit or low cash on hand; unfortunately, the bot learned the wrong correlation in that case.

“As a result of that it went on to start denying loans for people of color regardless of where they lived. It came to this conclusion and didn’t quite understand how geography tied into things. So you’ve got to worry more about that [versus accidentally creating SkyNet],” he said.

In this respect, the problem of making sure an AI is taught the right things is similar to making sure a child grows up with the right values. Sage’s Harris, though, noted that the consequences for a poorly taught AI can be much more severe.

“The difference is if you don’t raise a child right, the amount of damage that child can do is sort of contained. If you don’t raise an AI right, the opportunity to inflict harm is massive because the AI doesn’t sleep, it has endless energy. You can use AI to scan a room. An AI can look across a room of 1,000 people and very quickly identify 999 of them. If that’s used incorrectly, perhaps in law enforcement, to classify people, the AI getting people wrong can have catastrophic consequences. Whereas a person has no capacity to recognize 1,000 people,” he said.

However, Beena Ammanath, executive director of the global Deloitte AI Institute, noted that these bias case studies can be more nuanced than they first appear. While people strive to make AI unbiased, she said it can never be 100% so because it’s built by people, and people are biased. It’s more a question of how much bias we’re willing to tolerate.

She pointed out that, in certain cases, bias either is not a factor at all in AI or can even be a positive, as in the case of using facial recognition to unlock a phone. If the AI were completely unbiased, it wouldn’t be able to discriminate between users, defeating the purpose of the security feature. With this in mind, Ammanath said she would prefer to look at specific cases, as the technology’s use is highly context-dependent.

“So, facial recognition being used in a law enforcement scenario to tag someone as a criminal: If it’s biased, that’s probably something that should not be out in the world because we don’t want some people to be tagged that way. But facial recognition is also used to identify missing children, kidnapping victims, human trafficking victims and it is literally [used in] the exact same physical location, like a traffic light. Yes, it is biased, but it is helping us rescue 40% more children than before. If we hadn’t used it, is that acceptable or should we just completely remove that technology?” she said.

So then, rather than think of the topic in a broad philosophical sense, Ammanath said it’s more important to think about what people would actually need for AI to work effectively. One of the biggest things, she said, was trust. It’s not so much about
