Unsourced In, Unsourced Out: The Biggest Concerns of Generative AI Ethics

These are three of the biggest generative AI ethics concerns so far.

Generative AI has taken the world by storm. Conversational AIs like ChatGPT and Google’s Bard are delivering novel capabilities, and we’re only beginning to understand what may be possible with these technologies. On the visual side, tools like DALL-E and Midjourney are generating unique, surprisingly lifelike, sometimes terrifying imagery.

Even a quick glance at what this tech can do makes it obvious that there are enormous ethical concerns to address. Lawmakers are already talking about regulating AI, but because technology develops so quickly, there is often a lag between a technology’s release and a clear understanding of what to do with, and about, it.

Most of the tech industry understands and agrees that generative AI is, well, generating ethical concerns. Here are three of the biggest so far.

Bias and Discrimination

A fundamental principle of generative AI is that what you put in is what you get out. Feed a model only Shakespeare and the results will sound vaguely Shakespearean. The same goes for science journals, slam poetry, or anything else.

So training an AI model is where ethical considerations start. What sources are included and excluded? Are image models training on images of diverse populations? Are large language models pulling from a wide range of perspectives? What about perspectives most of us would consider fringe or even extreme? (And who gets to decide what’s fringe or extreme?)

One solution is assembling teams of people with varied perspectives and interests to help build these AI models. Doing so can help avoid at least blatant racial or gender bias and discrimination, and it could help teams catch blind spots. The end goal, of course, is AI models that don’t produce discriminatory or biased content.

Weighing the risks and benefits of each application of AI against real-world experience in fields such as law and justice, medicine, and education allows industry leaders to make informed decisions as they develop solutions to real-world problems.

Misuse for Malicious Purposes

With any new technology come chances for opportunists to exploit it for personal gain, and that’s certainly happening with generative AI. Hackers and cybercriminals are already using it to create more convincing phishing attacks (“Write an email in the style of Apple Support”), deepfake videos (such as political figures convincingly saying things they never said), and other fraudulent content.

It’s easy to get a little alarmed here. These technologies are in their infancy, and they can already produce pretty convincing fictional content using well-known figures (like a trio of former US presidents playing video games).

As these technologies improve, how will the average person know whether a source is factual, original, or verified? And how can generative AI be held accountable?

Unchecked generative AI can lead to severe consequences, such as identity theft, reputational damage, and financial loss. The flow of data therefore needs built-in checks at every step: how data is acquired, how it is processed, where and for how long it is stored, and when and how it is added to the pool the AI draws from.

Legal and Ethical Responsibility

There are also plenty of questions about legal, personal, and organizational responsibility for the output of generative AI. If AI-generated content defrauds someone out of thousands of dollars, destabilizes a government, or leads to any other seriously negative outcome, who is responsible, if anyone?

On some level, policymakers have to get a handle on how this will work, and soon. As powerful as generative AI looks to be, we need to be sure the benefits outweigh the potential negative impacts.
