More than one in four companies now prohibit employees from using generative AI. But that does little to protect against criminals who use it to trick employees into sharing sensitive information or paying fraudulent invoices.
Using ChatGPT or its dark web counterpart, FraudGPT, criminals can easily create convincing deepfakes: fabricated income statements, fake IDs, false identities, and even realistic video and audio of company executives.
The statistics are sobering. In a recent survey by the Association for Financial Professionals, 65% of respondents said their organization was the victim of attempted or actual payment fraud in 2022. Of those who lost money, 71% were compromised through email, and the survey found that large organizations with $1 billion in annual revenue were the most susceptible to email scams.
Among the most common email scams are phishing emails. These messages appear to come from a trusted source such as Chase or eBay and ask recipients to click a link leading to a fake but convincing-looking site, which then prompts them to log in and provide personal information. Once a criminal has that information, they can access the victim's bank account or even commit identity theft.
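A telltale sign of many phishing emails is that the link's visible text names one site while its actual destination points somewhere else. As a deliberately simplified sketch (the domains below are made up, and real email filters layer reputation data and proper public-suffix parsing on top of a check like this):

```python
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names one site but whose
    destination points somewhere else entirely."""
    shown = urlparse(display_text if "://" in display_text
                     else f"https://{display_text}").hostname or ""
    actual = urlparse(href).hostname or ""
    # Crude suffix comparison; production filters use much more than this.
    return not actual.endswith(shown.removeprefix("www."))

# "www.chase.com" is shown to the victim, but the link goes elsewhere.
print(link_mismatch("www.chase.com", "https://chase-secure-login.example.net"))  # True
print(link_mismatch("www.chase.com", "https://www.chase.com/login"))             # False
```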
Spear phishing is similar but more targeted. Instead of sending out generic emails, criminals address them to a specific individual or organization, often after researching job titles, the names of co-workers, and even the names of bosses or managers.
Old-fashioned scams are getting bigger and more sophisticated
Of course, these scams aren't new, but generative AI makes it much harder to tell what's real and what's fake. Until recently, wonky fonts, odd writing styles, and grammar mistakes were easy to spot. Now, criminals anywhere in the world can use ChatGPT or FraudGPT to craft convincing phishing and spear-phishing emails. They can even impersonate a company's CEO or another executive by hijacking the person's voice for a fake phone call or their image for a video call.
That's what happened recently to a finance employee in Hong Kong who thought he had received a message from his company's UK-based chief financial officer asking him to transfer $25.6 million. Though he initially suspected a phishing email, the employee's concerns were allayed after a video call with the CFO and a colleague he knew. As it turned out, everyone on the call was a deepfake. It was only after checking with headquarters that he discovered the deception, and by then the money had already been transferred.
“The effort to make these things reliable is actually pretty impressive,” said Christopher Budd, a director at cybersecurity firm Sophos.
Recent high-profile deepfakes involving celebrities show how rapidly the technology has evolved. Last summer, a fake investment scheme featured a deepfaked Elon Musk promoting a platform that doesn't exist. There were also deepfaked videos of CBS News anchor Gayle King, former Fox News host Tucker Carlson, and talk show host Bill Maher, all purportedly discussing Musk's new investment platform. These videos circulated on social platforms such as TikTok, Facebook, and YouTube.
“It's becoming increasingly easy for people to create synthetic identities, using either stolen information or information fabricated with generative AI,” said Andrew Davies, global head of regulatory affairs at regulatory technology firm ComplyAdvantage.
“There is a wealth of information available online that criminals can use to create very realistic phishing emails,” said Cyril Noel-Tagoe, principal security researcher at Netacea, a cybersecurity company focused on automated threats. “They will have been trained on information about the company and will know about the company, the CEO, and the CFO.”
Big companies at risk in a world of APIs and payment apps
While generative AI makes threats more credible, the proliferation of automation and of websites and apps that handle financial transactions is increasing the scale of the problem.
“One of the real drivers of the evolution of fraud and financial crime in general is the transformation of financial services,” Davies said. Just 10 years ago, there were few ways to move money electronically, and most of them went through traditional banks. The explosion of payment solutions such as PayPal, Zelle, Venmo, and Wise has widened the playing field, giving criminals more places to attack. And traditional banks increasingly use APIs (application programming interfaces) to connect apps and platforms, which represent yet another potential point of attack.
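Payment APIs typically defend against tampering at exactly these connection points by signing each message. As a rough illustration, not any specific provider's scheme (the secret and header name here are hypothetical), a receiving service might verify an HMAC signature on each webhook before trusting a payment notification:

```python
import hashlib
import hmac

# Hypothetical shared secret; real payment providers each define
# their own signing scheme and header names.
WEBHOOK_SECRET = b"example-shared-secret"

def verify_webhook(payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare it,
    in constant time, to the signature the sender attached."""
    expected = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# A forged notification sent without the secret fails the check.
payload = b'{"event": "payout", "amount": 25600000}'
good_sig = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
print(verify_webhook(payload, good_sig))  # True: authentic
print(verify_webhook(payload, "0" * 64))  # False: rejected
```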
Criminals use generative AI to quickly craft credible messages, then use automation to scale up. “It's a numbers game. If you run 1,000 spear-phishing emails or fraud attacks against CEOs and one in 10 is successful, that could be millions of dollars,” Davies said.
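The arithmetic behind that “numbers game” is easy to sketch. Assuming illustrative figures (the average payout below is hypothetical; the campaign size and one-in-ten success rate come from Davies' example):

```python
# Back-of-the-envelope expected take for a bulk spear-phishing campaign.
emails_sent = 1_000      # campaign size from Davies' example
success_rate = 1 / 10    # one in ten succeeds, per the quote
avg_payout = 50_000      # hypothetical average loss per successful scam

expected_take = emails_sent * success_rate * avg_payout
print(f"${expected_take:,.0f}")  # $5,000,000 -- "millions of dollars"
```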
According to Netacea, 22% of businesses surveyed said they had been attacked by fake-account-creation bots; in the financial services industry, the figure rose to 27%. Of the companies that detected automated bot attacks, 99% said the number of attacks increased in 2022. Larger companies were the most likely to see big jumps, with 66% of companies with $5 billion or more in revenue reporting a “significant” or “moderate” increase in attacks. And while all industries said they experience fake account registrations, financial services was the most targeted, with 30% of financial services companies saying that 6% to 10% of new accounts are fake.
The financial industry is fighting generative AI-powered fraud with generative AI models of its own. Mastercard recently announced that it has built a new AI model to help detect fraudulent transactions by identifying “mule accounts” used by criminals to move stolen funds.
Criminals are increasingly using impersonation tactics to trick victims into believing a money transfer is legitimate and going to a real person or company. “Banks are finding these frauds extremely difficult to detect,” Ajay Bhalla, Mastercard's president of cyber and intelligence, said in a July statement. “Customers pass all the required checks and send the money themselves. Criminals don't have to breach any security measures,” he said. Mastercard estimates its algorithm will help banks save money by reducing the costs they would normally incur rooting out fraudulent transactions.
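Mastercard hasn't published how its model works, but the “mule” pattern it targets, accounts that receive stolen funds and quickly forward them on, can be illustrated with a deliberately simplified rule (the data and thresholds below are invented; real systems score graphs of millions of accounts):

```python
from datetime import datetime, timedelta

# Toy transaction log: (account, direction, amount, timestamp). Made-up data.
transactions = [
    ("acct_42", "in",  9_800, datetime(2024, 3, 1, 10, 0)),
    ("acct_42", "out", 9_500, datetime(2024, 3, 1, 10, 45)),
    ("acct_77", "in",  1_200, datetime(2024, 3, 1, 9, 0)),
]

def looks_like_mule(account: str, window: timedelta = timedelta(hours=2)) -> bool:
    """Flag accounts that forward most of an inbound payment within a short window."""
    inbound = [t for t in transactions if t[0] == account and t[1] == "in"]
    outbound = [t for t in transactions if t[0] == account and t[1] == "out"]
    for _, _, amt_in, ts_in in inbound:
        for _, _, amt_out, ts_out in outbound:
            quick = timedelta(0) <= ts_out - ts_in <= window
            most_of_it = amt_out >= 0.9 * amt_in
            if quick and most_of_it:
                return True
    return False

print(looks_like_mule("acct_42"))  # True: pass-through pattern
print(looks_like_mule("acct_77"))  # False: money stays put
```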
More detailed identity analysis required
Some particularly motivated attackers may have inside information. Still, while criminals have become “very sophisticated,” Noel-Tagoe added, “they will never know exactly what's inside a company.”
It may not be possible to know immediately whether a money transfer request from a CEO or CFO is legitimate, but employees can find ways to verify it. Companies should have specific procedures in place for transferring money, Noel-Tagoe said. So if the normal channel for money transfer requests is an invoicing platform rather than email or Slack, and a request arrives some other way, find a different route to contact the executive and confirm it.
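That “confirm through a separate channel” rule can be written directly into a payments workflow. A minimal sketch (all names and the channel list here are hypothetical): any transfer request that arrives outside the sanctioned channel is held until someone confirms it out of band.

```python
from dataclasses import dataclass

APPROVED_CHANNEL = "billing_platform"  # hypothetical: the one sanctioned intake path

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    channel: str                          # where it arrived: email, slack, billing_platform...
    confirmed_out_of_band: bool = False   # e.g., a call back to a known phone number

def release_transfer(req: TransferRequest) -> str:
    """Hold anything that bypasses the approved channel until a human
    confirms it through a second, independent channel."""
    if req.channel != APPROVED_CHANNEL and not req.confirmed_out_of_band:
        return "HOLD: verify via a known phone number before releasing funds"
    return f"RELEASE: ${req.amount_usd:,.2f} approved for processing"

# An urgent "CFO" request over email gets held, not paid.
print(release_transfer(TransferRequest("cfo@company.example", 25_600_000, "email")))
```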
Another way companies are trying to distinguish real identities from deepfaked ones is through more detailed authentication processes. Digital identity companies now often ask for an ID and perhaps a real-time selfie as part of the process. Soon, companies may also ask people to blink, say their name, or perform other actions that distinguish live video from something pre-recorded.
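One common way to implement such checks is a randomized challenge-response: because the prompt isn't known in advance, a pre-recorded deepfake can't contain the right action. A toy sketch (the challenge list and verification hook are invented for illustration; real systems verify the action with computer vision):

```python
import secrets

# Hypothetical action prompts for a liveness check.
CHALLENGES = ["blink twice", "turn your head left",
              "say your full name", "raise your right hand"]

def issue_challenge() -> str:
    """Pick an unpredictable prompt so a canned video can't match it."""
    return secrets.choice(CHALLENGES)

def verify_liveness(challenge: str, observed_action: str) -> bool:
    # Stand-in for a vision model that checks the live video feed.
    return observed_action == challenge

challenge = issue_challenge()
print(f"Please {challenge} on camera.")
print(verify_liveness(challenge, challenge))      # live user complies: True
print(verify_liveness(challenge, "prerecorded"))  # canned video fails: False
```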
It will take time for companies to adapt, but for now, cybersecurity experts say generative AI is leading to a surge in highly convincing financial scams. “I've been in the technology industry for 25 years at this point, and this enhancement from AI is like pouring jet fuel on a fire,” said Sophos' Budd. “That's something I've never seen before.”