We need to know how AI firms fight deepfakes

When people worry about artificial intelligence, it's not just because of what they see in the future, but because of what they remember from the past, especially the toxic effects of social media. Over the years, misinformation and hate speech have evaded the policing systems of Facebook and Twitter and spread around the world. Now deepfakes are infiltrating those same platforms, and while the platforms remain responsible for how harmful content gets distributed, the AI companies that make the tools behind it also have a cleanup role to play. Unfortunately, like the social media companies before them, they are doing that work behind closed doors.

I contacted a dozen generative AI companies whose tools can produce photorealistic images, videos, text, and voices, asking how they make sure their users follow the rules. Ten replied, all confirming that they use software to monitor what their users produce, and most said humans also check those systems. None agreed to disclose how many people are responsible for overseeing those systems.

And why should they? Unlike other industries such as pharmaceuticals, autos, and food, AI companies have no regulatory obligation to disclose details of their safety practices. Like the social media companies before them, they can be as secretive about that work as they want, and they will likely remain so for years to come. Europe's upcoming AI Act touts “transparency requirements,” but it's unclear whether it will force AI firms to have their safety practices audited the way carmakers and food manufacturers are.

In those other industries, it took decades for stricter safety standards to take hold. But the world can't afford to give AI tools free rein for that long when they are developing so rapidly. Midjourney recently updated its software to create photorealistic images that show the pores and fine lines of politicians' skin. At the start of a huge election year, when nearly half of the world is heading to the polls, a gaping regulatory vacuum means AI-generated content could have a devastating impact on democracy, women's rights, the creative arts, and more.

Here are some ways to address the problem. One is to push AI companies to be more transparent about their safety practices, which starts with asking questions. When I approached OpenAI, Microsoft, Midjourney, and others, I kept the questions simple: how do you enforce your rules, using software and humans, and how many people do that work?

Most were willing to share several paragraphs of detail about their processes for preventing misuse (albeit couched in vague public-relations speak). OpenAI, for example, said it had two teams of people working to retrain its AI models to make them safer or to respond to harmful outputs. Stability AI, the company behind the controversial image generator Stable Diffusion, said it used safety “filters” to block images that violate its rules, with human moderators checking flagged prompts and images.
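
To make that description concrete, here is a minimal, purely illustrative sketch of what such a pipeline can look like: an automated filter scores each prompt, and anything above a threshold is held in a queue for a human moderator. The blocklist, scoring function, and threshold below are hypothetical placeholders, not a description of any company's actual system.

```python
# Illustrative sketch of a prompt-moderation pipeline: an automated filter
# flags risky prompts, and flagged items are queued for human review.
# The blocklist, scoring function, and threshold are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import List

BLOCKED_TERMS = {"deepfake", "non-consensual"}  # hypothetical example terms

def risk_score(prompt: str) -> float:
    """Stand-in for a trained classifier; here, a crude keyword check."""
    words = prompt.lower().split()
    hits = sum(1 for term in BLOCKED_TERMS if term in words)
    return min(1.0, hits / max(len(BLOCKED_TERMS), 1))

@dataclass
class ReviewQueue:
    """Holds flagged prompts until a human moderator approves or rejects them."""
    pending: List[str] = field(default_factory=list)

    def add(self, prompt: str) -> None:
        self.pending.append(prompt)

def moderate(prompt: str, queue: ReviewQueue, threshold: float = 0.5) -> str:
    score = risk_score(prompt)
    if score >= threshold:
        queue.add(prompt)          # route to human review
        return "held_for_review"
    return "allowed"               # low-risk prompts pass straight through

if __name__ == "__main__":
    queue = ReviewQueue()
    print(moderate("a photorealistic portrait of a cat", queue))   # allowed
    print(moderate("make a deepfake of a politician", queue))      # held_for_review
    print(f"{len(queue.pending)} prompt(s) awaiting human review")
```

The point of the sketch is the second stage: however good the automated filter, someone still has to look at what it catches, which is where the question of human headcount comes in.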

Only a few companies disclosed how many humans they employ to monitor those systems. Think of these people as internal safety auditors. On social media they are known as content moderators, and they play a challenging but crucial role in double-checking content that the platforms' algorithms flag as racist, misogynistic, or violent. Facebook has more than 15,000 moderators working to protect the integrity of the site without stifling user freedom. It's a delicate balance that humans still do best.

Sure, with their built-in safety filters, most AI tools don't churn out the kind of toxic content that people post on Facebook. But they could make themselves safer and more reliable by hiring more human moderators. Humans are the best stopgap in the absence of better software for catching harmful content, and such software has so far proven lacking.

Obscene deepfakes of Taylor Swift and voice clones of President Joe Biden and other international politicians have gone viral, to name just a few examples, underscoring that AI and tech companies are not investing enough in safety. Admittedly, hiring more humans to help enforce their rules is like fetching more buckets of water to put out a house fire: it may not solve the whole problem, but it will make things better for a while.

“If you're a startup building a tool with a generative AI component, hiring humans at various points in the development process is both smart and important,” says Ben Whitelaw, founder of Everything in Moderation, a newsletter about online safety.

Many AI firms have admitted to having only one or two human moderators. Video-generation company Runway said its own researchers handled that job. Descript, which makes a voice-cloning tool called Overdub, said it checked only a sample of cloned voices to make sure they matched the consent statement read by customers. A spokesperson for the startup argued that checking its users' work would violate their privacy.

AI companies have unprecedented freedom to conduct their work in secret. But if they want to secure the trust of the public, regulators, and civil society, it is in their interest to pull back the curtain further and show how, exactly, they enforce their rules. Hiring a few more people wouldn't be a bad idea either. Focusing too much on the race to make AI “smarter”, so that fake photos look more realistic, text reads more fluently, or cloned voices sound more convincing, risks leading us deeper into a dangerous, confusing world. Better to bolster and expose those safety measures now, before everything gets much harder.
