When Big Four accounting firm EY tried out an artificial intelligence system trained to recognise fraud on the accounts of some of its UK audit clients earlier this year, the results were striking.
According to Kath Barrow, EY’s UK and Ireland assurance managing partner, the new system detected suspicious activity at two of the first 10 companies checked. The clients subsequently confirmed that both cases had been frauds.
This early success illustrates why some in the industry believe AI has great potential to improve audit quality and reduce workloads. The ability of AI-powered systems to ingest and analyse vast quantities of data could, they hope, provide a powerful new tool for alerting auditors to signs of wrongdoing and other problems.
Yet auditors disagree sharply about how far they can rely on a technology that has not yet been widely tested and is often poorly understood.
Some audit firms are sceptical that AI systems can be fed enough high-quality information to detect the many different forms of fraud reliably. There are also concerns about data privacy if auditors use confidential client information to develop AI systems.
These questions have produced clear differences in approach among the UK’s big audit firms. While EY declined to reveal the details of its software or the nature of the frauds it had discovered, Barrow said the results suggested the technology had “legs” for auditing.
“That feels like something we should be developing or exploring,” she said.
However, Simon Stephens, AI lead for audit and assurance at the UK business of Deloitte, another of the Big Four audit firms, pointed out that frauds were relatively rare and tended to differ from each other. That would mean there were not necessarily telltale patterns for AI systems to pick up.
“Frauds are...unique and each is perpetrated in a slightly different way,” Stephens said. “By nature, they are designed to circumvent safeguards through novel uses of technology or exploiting new weaknesses and AI doesn’t play well there right now.”
Regulators are likely to have the final say over how the technology can be deployed. Jason Bradley, head of assurance technology for the UK’s Financial Reporting Council, the audit watchdog, said AI presented opportunities to “support improved audit quality and efficiency” if used appropriately.
But he warned that firms would need the expertise to ensure systems worked to the right standards. “As AI usage grows, auditors must have the skills to critique AI systems, ensuring the use of outputs is accurate and that they are able to deploy tools in a standards-compliant manner,” he said.
While traditional audit software must be told which data patterns indicate fraud or other problems, AI systems are trained to spot issues using machine learning and data from multiple past known cases of misconduct. Over time they should become better at doing so as they accumulate experience.
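The distinction can be sketched in a few lines of code. The snippet below is a minimal illustration in Python, not any firm’s actual system: a rule-based check flags transactions that match conditions an auditor wrote by hand, while a machine-learning classifier infers its own patterns from labelled examples. All field names, features, thresholds and data here are invented for illustration.

```python
# Illustrative sketch only -- not any audit firm's actual system.
# Field names, thresholds and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# --- Traditional approach: hand-written rules ---
def rule_based_flag(txn):
    """Flag a transaction if it matches patterns an auditor specified."""
    return (
        txn["amount"] > 100_000          # unusually large
        or txn["posted_after_hours"]     # booked outside business hours
        or txn["is_round_number"]        # suspiciously round figure
    )

# --- Machine-learning approach: learn patterns from labelled history ---
# X: numeric features of past transactions; y: 1 = known fraud, 0 = clean.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 3))                        # stand-in features
y_train = (X_train[:, 0] + X_train[:, 1] > 2).astype(int)   # toy labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score new transactions: the output is a risk estimate, not a verdict.
X_new = rng.normal(size=(5, 3))
fraud_risk = model.predict_proba(X_new)[:, 1]
print(fraud_risk)
```

The practical difference is that the rules only catch what someone thought to write down, whereas the trained model can, in principle, improve as more labelled cases accumulate.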
The technology could be particularly helpful if it reduces auditor workloads. Firms across the world are struggling to train and recruit staff. It could also help raise standards: in recent years auditors have missed serious financial problems that have caused the collapse of businesses including outsourcer Carillion, retailer BHS and cafe chain Patisserie Valerie.
EY’s experiment, according to Barrow, used a machine-learning tool that had been trained on “lots and lots of fraud schemes”, drawn from both publicly available information and past cases where the firm had been involved. While existing, widely used software looks for suspicious transactions, EY said its AI-assisted system was more sophisticated. It had been trained to look for the transactions typically used to cover up frauds, as well as the suspicious transactions themselves. It detected the two fraud schemes at the 10 initial trial clients because there had been similar patterns in the training data, the firm said.
“All it’s doing is saying: This is something you should explore further,” Barrow said of the AI system, which she described as a “co-pilot” for auditors. “It focuses our efforts to understand more.”
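That “co-pilot” framing describes a triage step rather than an automated judgment. Continuing the hypothetical sketch above, such a system might simply rank scored transactions and hand only the highest-risk ones to an auditor for follow-up; the function and values below are invented for illustration.

```python
# Continuing the illustrative sketch: triage, not judgment.
# The model prioritises; a human auditor investigates.
def flag_for_review(transaction_ids, risk_scores, top_n=10):
    """Return the top-N highest-risk transactions for manual follow-up."""
    ranked = sorted(zip(transaction_ids, risk_scores),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

# e.g. flag_for_review(ids, fraud_risk, top_n=3) might return
# [("TXN-0412", 0.91), ("TXN-0087", 0.84), ("TXN-1290", 0.79)]
```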
Yet other firms doubt that AI systems are clever enough to detect sophisticated frauds. KPMG UK, another Big Four auditor, echoed the concerns of Stephens at Deloitte.
“Fraud by its nature is unpredictable and therefore using known fraud cases to train machine learning models is challenging,” KPMG said.
Stephens acknowledged that the technology had its uses in auditing. But he saw a far more limited role for it. “AI can automate some of the more mundane, repeatable tasks and allows our auditors to focus on the areas of greatest risk,” he said.
Deloitte currently restricts its use of AI to less complex tasks, giving its systems clear instructions on what kinds of anomalies to look for in company accounts.
One issue, Stephens said, was that a company might regard its detailed financial data as proprietary information. That would make it difficult to use that private information to train a system that subsequently audited another company.
“Anyone developing AI has to be cognisant of that,” he said.
Barrow acknowledged there were challenges. She said it was vital for auditors to understand how the AI system’s coding worked, the real meaning of the results it produced and the nature of the data that had been used to train it.
“We need to supplement it with ... applying that auditor lens of scepticism, so that we can be clear it’s fit for purpose,” she said.
She also recognised the issue around using proprietary corporate information to train AI systems. But she said there was enough publicly available information to supplement EY’s casework and provide meaningful training for the firm’s own AI systems.
“Technology is already applied in quite a big way to help us with risk assessment, with risk identification,” Barrow said. “AI will be increasingly another tool at our disposal to do that.” – Copyright The Financial Times Limited 2023