The Malicious Potential of ChatGPT on Social Media

The potential uses and misuses of AI on social media, along with the ethical considerations, potential for regulation, and strategies to mitigate unethical use of these technologies.

AI tools like ChatGPT have the potential to do a lot of good, but let’s be honest: there are risks.

Especially in the social media arena, it’s easy to see how these tools could go wrong. Tools that can say anything convincingly, sound like anyone, and even look like anyone? Someone’s going to find a way to use them to do some pretty unsettling things.

This week’s newsletter will walk through the potential uses and misuses of AI on social media, along with the ethical considerations, potential for regulation, and strategies to mitigate unethical use of these technologies.

The Promise and Potential of AI on Social Media

AI tools do have some upsides on social media. Systems powered by artificial intelligence and machine learning are already hard at work producing personalized content recommendations (to varying degrees of accuracy). If you’ve ever used TikTok, it’s almost scary how quickly the algorithm can figure out what you like and then serve up a near-uninterrupted stream of exactly that thing.

AI tools are also enhancing content moderation and spam detection efforts. Again, these aren’t foolproof, but they can screen content at a scale that human moderators never could.

Another area with plenty of promise is chatbot-based customer service. Many brands are already using programmed (“canned”) AI chatbots on their sites, and a new wave of conversational chatbots shows a lot of promise. These chatbots could also be deployed via social as part of an omnichannel customer support strategy: customers could then chat with brands via Messenger, for example, and bots would handle basic or first-level queries.

Unintended Consequences: Malicious Use of AI on Social Media

Though AI has legitimate and beneficial uses related to social media, unleashing a new wave of AI products based on LLMs like ChatGPT could have unintended consequences. Experts are already sounding the alarm on specific ways this technology could be weaponized, especially in the context of social media.

The most obvious is the risk of AI-generated content spreading misinformation and fake news. We’re not talking about your crazy conspiracy-theorist aunt here: we’re talking about state-sanctioned disinformation campaigns that no longer require native-level English proficiency.

Despite some basic guardrails, ChatGPT can be tricked into producing misinformation (for example, it won’t tell you the earth is flat, but if you ask it to prepare you to debate someone who believes it, you might convince it to summarize their best “arguments”). Other similar LLM tools have no such guardrails, and it’s easy to imagine a future where a nation state could weaponize their own “BadActorGPT.”

Another worrying trend: with deepfake videos and AI-powered voice cloning, anyone can be made to appear to say or do anything. So far we’ve mostly seen these tools used for comedic purposes, such as this (warning: profanity-laced) parody video of three former US presidents playing Wii Sports Golf.

Now, it doesn’t take long to “hear through” that video and realize it isn’t real. But at the same time, any moderately politically informed American would instantly recognize those voices as Biden, Trump, and Obama. It isn’t a stretch to say that, in the wrong hands, these technologies could be used for ultra-convincing political disinformation, or even identity theft or cyberbullying.

In the hands of the average teenager, this tech may not be sufficiently advanced to fool all of society, but on an individual level it could still be quite harmful.

Ethical and Regulatory Challenges

As we’ve already seen, there are some worrying ethical concerns around the use of AI tools in the social media sphere. We’ve grown accustomed to believing that what we see from real people on social is a) actually from those people and b) what they’re actually saying and thinking in their own brains.

A landscape where we’re no longer sure of either is uncomfortable, to put it mildly.

This leads to the natural question of regulation: some of the biggest names in AI are already calling for governments to step in and regulate the industry. This can look an awful lot like a cop-out or a dodge of responsibility, but the truth is, these leaders know that solving these ethical problems via regulation is not going to be simple — if it’s even possible at all.

One concept gaining some traction is digital watermarking: embedding a detectable signal in AI-generated content. So far both OpenAI (maker of ChatGPT) and Google have committed to implementing it…eventually.

First they just have to figure out how. And that’s one of the big criticisms here: “unleash a potentially dangerous product into the world first, figure out how to make it less dangerous second” might not be the most noble approach.

And digital watermarking only works if everyone agrees to do it. With countless ___GPT clones out there, someone’s going to offer a watermark-free option. (And it could be a rival nation outside the US’s regulatory reach.)
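To make the watermarking idea concrete: for text, it’s usually imagined as a statistical signal rather than a visible stamp. Here’s a toy sketch of one research-style approach — a heavily simplified “green list” scheme, which is an illustration of the general concept, not OpenAI’s or Google’s actual method:

```python
import hashlib
import random

# Tiny stand-in vocabulary; a real LLM has tens of thousands of tokens.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "fast", "slow"]

def green_list(prev_token: str) -> set[str]:
    """Deterministically split the vocab in half based on the previous token."""
    # Anyone holding the secret salt can reproduce the same split later.
    seed = int(hashlib.sha256(f"secret-salt:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = rng.sample(VOCAB, len(VOCAB))
    return set(shuffled[: len(VOCAB) // 2])

def watermarked_sample(length: int, seed: int = 0) -> list[str]:
    """A 'generator' that always picks its next word from the current green half."""
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    while len(tokens) < length:
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list[str]) -> float:
    """Detector: watermarked text hits the green list far above the ~50% chance rate."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

Run the detector on `watermarked_sample(200)` and the green fraction is essentially 100%; run it on randomly chosen words and it hovers around 50%. The catch is exactly the problem noted above: the signal only exists if the generator cooperates, so a watermark-free clone produces nothing to detect.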

Mitigation Strategies

While these problems are concerning, several mitigation strategies could help.

First, social media platforms could introduce better content moderation algorithms to screen out or flag AI-generated content. This is a long way off, though: we don’t currently have technology that can reliably distinguish AI-written text from human writing, and watermarking (discussed above) is far from a sure thing.

Second, the media and government have a role to play in educating the public on what is and isn’t possible with AI, plus how to spot signs of disinformation.

Third, some services (like voice cloning or video avatar services) must institute stricter verification procedures to ensure people aren’t cloned without consent. Controls on who can use those clones, and how and why, would be a good development as well.

All in all, the future of these technologies in the social media realm is both exciting and concerning. There’s great potential for good and for harm, and the technology’s capabilities are currently outpacing our ability to reliably detect, control, filter, and block the bad stuff.

Regulation seems inevitable at this point. Here's hoping the powers that be get it right.
