AI ‘deep fake’ voices take scams to unheard-of levels
Criminals are taking advantage of advances in AI technology to trick vulnerable families out of their money.
Scammers are using the latest AI technology to impersonate companies or even family members, according to a report published on Monday by intelligence firm Recorded Future.
The new AI-assisted scams threaten to worsen an epidemic across Australia, with official data showing losses to scammers ballooned to more than $500 million in 2022.
Alexander Leslie, an analyst for Recorded Future who prepared the research, said AI is rapidly making it easier for scammers to fool unsuspecting consumers and even professionals.
AI is being used to “deep fake” voices that Australians trust, allowing criminals to impersonate companies, government agencies and potentially even loved ones, he said.
“The rise of deep fakes is a great example of this, and what we’re seeing now is that more Australians are falling victims of these elaborate schemes,” he said on Monday.
“The outlook for voice cloning and its use in particular in banking fraud, disinformation, social engineering, copyright infringement, and more is bleak if we do not immediately adopt an industry-wide approach to mitigating associated risks.”
AI scams on the rise
In one example given by Recorded Future, scammers have used AI voice-cloning technology to turn short voice samples into convincing impersonations designed to access people’s finances.
With just a one-minute voice sample, technology available on the market allows criminals to effectively mimic a target’s family members, or other trusted figures in someone’s life.
“A family emergency scam is a type of fraud where the scammer poses as a family member or friend in need of urgent financial assistance due to an emergency,” the report said.
“Scammers can also involve fake authority figures, such as a law enforcement officer, lawyer or doctor, to make the lure more convincing and scare the victim.”
In another example explored by Recorded Future, scammers used deep-fake voice clips to get past voice-authentication and security measures at major banks, or to impersonate trusted financial institutions and target Australians directly.
Mohiuddin Ahmed, a senior lecturer at Edith Cowan University, said recent developments in AI technology, including the advent of services like ChatGPT and Google’s Bard, are helping cyber criminals to launch “more sophisticated attacks” at Australians.
“AI-assisted scams are another form of social engineering and disinformation attack,” he said.
But he said AI can be a double-edged sword for scammers, with such technology also providing an avenue to help catch criminals.
Those solutions were still being developed, he said.
Protective measures
Australians should be wary, he said, and take steps to guard against fraudsters.
“Australians should embrace multi-factor authentication more than ever,” he said.
“In addition, emotional intelligence is another crucial ingredient in fighting these newer variants of scams.”
Mr Leslie said business and government also have a role in getting ahead of the latest trends to protect consumers, and organisations need to act urgently.
“Mitigation strategies need to be multidisciplinary,” he said.
“Adopting a framework that educates employees, users and consumers will be more effective in the short term than fighting abuse of the technology itself, which should be a long-term strategic goal.”