Text scammers use AI to steal sensitive information like passwords and Social Security numbers as scam losses climb
ORGANIZED scam operations have been a serious threat since before the digital age.
But with the advent of more sophisticated artificial intelligence (AI) technology, it has become significantly more difficult to escape the grasp of the booming scam industry, the Federal Trade Commission (FTC) warns.
Earlier this year, FTC Chair Lina Khan reported that the agency was seeing an uptick in criminals using highly advanced AI tools to “turbocharge” fraud and scams.
“As this stuff becomes more embedded in how daily decisions are being made, I think they invite and merit a lot of scrutiny. Those problems and concerns are quite urgent, and I think enforcers, be it at the state level or the national level, are going to be acting,” Khan told Bloomberg earlier this year.
Scammers are using more sophisticated methods of “phishing” via email, text messages, social media, and other digital channels to contact victims.
The FTC defines phishing as “an attempt to steal the consumer’s information, personal information, financial information, passwords,” said Benjamin Davidson, a consumer protection attorney with the FTC’s Division of Marketing Practices.
According to the FTC’s Consumer Sentinel Network, there were more than 2 million fraud reports in 2022. So far in 2023, there have been 1.1 million fraud reports.
Though the volume of fraud reports has not risen, Davidson pointed out that the amount of money people are losing to fraud has increased.
Reportedly, $658 million was lost to scams initiated through digital channels, with social media and phone calls the top contact methods used by criminals. Phone call scams cost victims an average of $1,400 per person, according to reports filed with the FTC.
Imposter scams, in which the scammer claims to be a government official or a representative of a company like Amazon or Wells Fargo, have become more prominent. In 2022, text message-based phishing scams were “the leading contact method for fraud complaints,” Davidson said. In these scams, victims are lured with a variety of text-based pitches: offers of free gift cards, fake package delivery notices, bogus job offers, tech support scams and, most commonly, messages impersonating the victim’s bank.
Most recently, AI software has advanced to the point that criminals can use it to clone voices.
Although AI-generated audio clips vary in quality and authenticity, some tools can disturbingly mimic the voices of a person’s family members and friends in what are often described as “family emergency scams.”
Davidson said that scammers obtain sound files of a victim’s family member, often through social media, and use software to clone that voice and pose as the family member in distress.
Much like its visual equivalent, the “deepfake,” wherein faces can be superimposed onto other bodies in videos, voice-cloning technology is often believable.
“They say they’re in jail and need money to be bailed out; they’re traveling abroad, and they lost their passport and need money for a plane ticket; or they’re in a car accident. There’s always an emergency,” Davidson explained. “The consumers we talked to who later realized that the person they were speaking with was a scammer and not a loved one [described] a really jarring experience.”
Last year, the FTC found that victims over the age of 70 reported higher individual losses to scams than the median or average loss.
Because AI-driven fraud is becoming more difficult to suss out, Davidson suggested that consumers use a security question when they encounter a caller claiming to be a family member.
“It doesn’t need to be a fancy password arranged ahead of time,” Davidson said, suggesting questions with answers only that family member would know, such as, “What did we have for dinner last night?”
Over the last year, AI technology has surged in popularity for its uncanny ability to generate convincing emails, text messages, essays, and artwork. Lawmakers at all levels of government are currently debating how to regulate the new technology, with supporters touting AI’s convenience and opponents warning of data breaches and privacy risks. (Klarize Medenilla/AJPress)