A voice note from your teenage daughter tells you that she urgently needs money transferred to a bank account. It sounds remarkably like her in accent, vocabulary and tone, but it isn’t. It’s just one of the latest manifestations of how AI is being used by bad actors to scam unsuspecting consumers. A voice sample of just a few seconds, harvested by a scammer, can be used for such nefarious purposes.
For businesses, the stakes can be much higher and the losses much greater. The powerful capacity of generative AI to perform complex business tasks at scale is a double-edged sword. Because the financial services sector runs on the analysis of numbers and text, large language models can be exploited by criminals, particularly the more modern applications that understand nuance and context. It’s a race between good and evil.
FraudGPT, for example, is a malicious ChatGPT-style tool that can be purchased on the dark web. It creates content to facilitate cyberattacks. A researcher from the data analytics firm Netenrich purchased and tested the application last year.
A test prompt from the researcher asked the tool to create bank-related phishing emails. Users merely needed to include the bank’s name in their prompt and FraudGPT could do the rest, even suggesting where in the content to insert a malicious link. The application could also create fake landing pages that encouraged visitors to enter their details, and an advertisement for it on the dark web boasted that it could write malicious code, build undetectable malware, find vulnerabilities and identify targets.
For those guarding against fraud, detection involves analysing vast quantities of data that need to be verified instantaneously. Fraudsters use sophisticated techniques and can learn ways to outsmart fraud detection systems, for example by breaking activity into complex chains of transactions to avoid notice. This is where traditional rules-based systems can miss patterns and fail.
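To make that point concrete, here is a deliberately simplified Python sketch, with invented account names, amounts and thresholds, contrasting a fixed single-transaction rule with a check that aggregates linked transfers over a short time window. Real detection systems combine many such signals with machine-learned models, but the example shows why a chain of small transactions can slip past a simple threshold.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical transaction records: (account, amount_eur, timestamp)
transactions = [
    ("ACC-1", 900, datetime(2024, 5, 1, 10, 0)),
    ("ACC-1", 950, datetime(2024, 5, 1, 10, 7)),
    ("ACC-1", 880, datetime(2024, 5, 1, 10, 15)),
    ("ACC-2", 12_000, datetime(2024, 5, 1, 11, 0)),
]

SINGLE_TXN_LIMIT = 10_000    # classic rules-based threshold
WINDOW = timedelta(hours=1)  # look-back window for chained activity
CHAIN_TOTAL_LIMIT = 2_500    # cumulative amount that triggers review

def rules_based_flags(txns):
    """Flag only individual transactions above a fixed threshold."""
    return [t for t in txns if t[1] > SINGLE_TXN_LIMIT]

def chained_activity_flags(txns):
    """Flag accounts whose many small transfers add up inside a short window."""
    flagged = set()
    by_account = defaultdict(list)
    for account, amount, ts in sorted(txns, key=lambda t: t[2]):
        by_account[account].append((amount, ts))
        # keep only transfers inside the look-back window
        by_account[account] = [(a, t) for a, t in by_account[account] if ts - t <= WINDOW]
        if sum(a for a, _ in by_account[account]) > CHAIN_TOTAL_LIMIT:
            flagged.add(account)
    return flagged

print(rules_based_flags(transactions))      # catches only the single 12,000 transfer
print(chained_activity_flags(transactions)) # also catches ACC-1's chain of small transfers
```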
Niall Mackey of TopSec Cloud Solutions says AI tools have also turbocharged the level of cyberattacks in recent years. The capacity of bad actors to customise and target their attacks is one of the more worrying aspects to emerge, he says.
Email is the gateway for 90 per cent of all exploits, says Mackey. As with ChatGPT, fluency of communication is also enabled by these rogue programmes, making many of the scams more convincing. Badly worded emails with spelling and grammar mistakes are now less common, for example.
While it is hard to quantify the level of attacks in Ireland, it is generally accepted that they are on the rise, with SMEs particularly vulnerable as large corporates are more likely to have the resources and expertise to deal with a cyberattack.
An international study by GuidePoint Security last autumn revealed an 83 per cent year-on-year increase in reported ransomware attacks, with the manufacturing and technology industries most impacted, followed by retail and wholesale trades.
“People say that they don’t pay ransoms but many do because if they didn’t you wouldn’t continue to see such high levels of attacks,” Mackey says. “If someone has been compromised once, there’s a strong chance it will happen again as the hackers will have left a back door open. Unfortunately, this often means that data needs to be wiped.”
Continuous investment in security, adopting practices such as two-factor authentication and increasing cybersecurity awareness throughout the organisation can all help mitigate threats – but breaches can have catastrophic effects.
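For readers unfamiliar with what two-factor authentication looks like in practice, the sketch below shows one common form, time-based one-time passwords, using the open-source pyotp library. The account name and issuer are invented, and a real deployment would verify a code submitted by the user at login rather than generating and checking it in the same script.

```python
import pyotp  # third-party library: pip install pyotp

# Enrolment: generate a per-user secret and share it once, e.g. via a QR code
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the password alone is not enough; the user must also supply
# the current six-digit code from their authenticator app
submitted_code = totp.now()  # in practice this comes from the user
if totp.verify(submitted_code):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```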
UK logistics firm KNP cited a ransomware attack as a key factor that drove it into administration last year, with the loss of 730 jobs. The 158-year-old haulage firm was reportedly already struggling before the incident, but the attack damaged key systems and data, preventing it from securing urgent new investment and forcing its collapse, according to its owners.
On a more positive note, Geraldine Magnier of Idiro Analytics says AI is helping financial institutions improve the management of fraud risk.
“Banks are becoming more and more sophisticated in this area and detecting where risk lies and where it doesn’t,” says Magnier. “The days of your credit card being blocked when you go on holidays are over. The emphasis now is very much on behavioural biometrics, looking at the patterns of how people use their devices.”
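As a rough illustration of the behavioural biometrics idea Magnier describes, the Python sketch below scores a session’s device-usage features against a stored per-user baseline. The features, baseline values and threshold are all invented, and production systems use machine-learned models over far richer signals than these.

```python
# Hypothetical per-session features: typing cadence (ms between keys),
# average swipe speed, and the hour this user typically logs in
baseline = {"key_interval_ms": 180.0, "swipe_speed": 1.2, "login_hour": 20}
tolerance = {"key_interval_ms": 40.0, "swipe_speed": 0.4, "login_hour": 3}

def anomaly_score(session: dict) -> float:
    """Sum of how many 'tolerances' each feature deviates from the user's baseline."""
    return sum(abs(session[f] - baseline[f]) / tolerance[f] for f in baseline)

# A session that looks like the genuine user
genuine = {"key_interval_ms": 190.0, "swipe_speed": 1.1, "login_hour": 21}
# A session with a very different typing rhythm and time of day
suspect = {"key_interval_ms": 60.0, "swipe_speed": 2.5, "login_hour": 4}

THRESHOLD = 4.0  # invented cut-off: above this, step up authentication
for label, session in [("genuine", genuine), ("suspect", suspect)]:
    score = anomaly_score(session)
    print(label, round(score, 2), "review" if score > THRESHOLD else "ok")
```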
The capacity of AI-powered fraud detection systems to detect complex criminal activity is resulting in criminals turning their attention away from corporates towards smaller prey, including consumers, she observes. Education and vigilance are key elements here.
“Humans are more open to being tricked but they are also open to being made more aware,” she adds.