AI’s Dark Side: The Growing Threat of AI Scams
AI technologies and automation hold so much promise to make our lives better and our work easier. But as these technologies continue to evolve and improve, we all need to stay aware of the dark side of AI: its potential for abuse.

We usually keep our updates positive and upbeat, but buckle up: this week’s going to be a little more ominous.
So this week, we’re diving into the world of AI scams and the technology behind them. We’ll start with one of the biggest (and maybe scariest): deepfake scams.
Deepfake Technology: Entertainment with Plenty of Fraud Potential
If you haven’t heard of deepfakes, these are AI-generated videos of real people (think celebrities and politicians) that look at least mildly convincing — and they’re getting better every month.
It’s entertaining enough to see a video of some lauded celebrity doing something ridiculous or silly, but you don’t have to imagine too hard to see where this could go.
Imagine a deepfake video of a politician announcing a (nonexistent) nuclear strike. Or a deepfake that puts a high-ranking business executive into an explicit compromising situation. On a smaller scale, a video of a mid-level politician stating misinformation could stoke political unrest, and a convincing video of your boss asking you to move files or money around could dupe you into compromising your business!
Technically, the term “deepfake” refers to the video component. But AI is also powering voice clones that are equally impressive. Search YouTube or TikTok for videos of US presidents playing popular video games to get an idea of how this technology can work, even using tools available to the typical consumer. (Warning: just about every one of those videos is chock full of adult language, so we won’t link to them here.)
The Next-Gen Grandma Scam
One particularly upsetting AI scam is what we’re calling the next-gen Grandma Scam. You’ve probably heard of the scam where some rando calls an elderly person (or messages them over chat apps) and claims to be their grandchild. The “grandchild” is in trouble — maybe even in jail — and needs money immediately to avoid certain calamities.
That scam was disgusting enough. But now, with easily available tools creating voice clones out of just a few snippets of a person’s voice, scammers can send audio that sounds just like someone’s actual grandchild!
Impacts and Potential Impacts of AI Scams
Take the next-gen Grandma Scam: the FTC estimates that this exact scam bilked people out of around $11 million last year. The same basic voice fraud approach can be used to impersonate doctors, lawyers, and just about any other professional that might legitimately require payment over the phone.
One bank manager in Hong Kong got fooled by a deepfake of a high-ranking bank director. The manager transferred an eye-popping $35 million, which vanished without a trace.
Emerging technologies with major financial potential will always be a popular scam target. That’s why deepfakes impersonating Elon Musk and an executive at Binance (a crypto exchange) have both lured people into various crypto scams.
Perhaps most worrisome is how this technology could target regular folks like small business owners. If your voice and likeness are already out there (say, in YouTube videos or Instagram Reels promoting your business), it’s already possible for someone to create a deepfake of you.
Imagine the ways this could harm your business, your professional reputation, and even your personal life. It’s not a pretty picture.
Prevention: How to Stay Safe and Not Get Fooled
At this point, the average person’s risk of being personally targeted remains low, but staying safe from this new generation of AI scams still requires vigilance. Just as you shouldn’t trust a single email (which could be a phishing attempt), you shouldn’t trust a single phone call. If the person on the other end requesting money really is a lawyer or your bank, they can provide secondary verification: a real-time email from a company account, for example.
The risk of being deepfaked — that is, having someone create a deepfake of you — is a tricky one. Short of deleting your entire online presence and never speaking on the phone (or even into a microphone!), you can’t keep your voice anonymous. Government agencies are starting to discuss regulation and detection technologies, but those efforts are still in their early stages.
So, to sum up: be safe. Stay vigilant. And don’t trust everything you see or hear. If it seems off — even in the slightest — find another method to confirm before taking action.