Safety by Design | TechCrunch



Welcome to TechCrunch Exchange, a weekly startups-and-markets newsletter. It's inspired by the daily TechCrunch+ column from which it takes its name. Want it in your inbox every Saturday? Sign up here.

Tech's knack for reinventing itself in cycles has its downsides: it can mean ignoring hard truths that others have already learned. The good news is that newer entrepreneurs sometimes learn those lessons faster than their predecessors did. – Anna

AI, trust and safety

This year is an Olympic year, a leap year . . . and also an election year. And before you accuse me of U.S. defaultism, I'm not just thinking of the Biden vs. Trump sequel: more than 60 countries are holding national elections, not to mention the EU Parliament.

How these votes turn out will affect tech companies; different parties hold different positions on AI regulation, for example. But even before the elections take place, technology has a role to play in guaranteeing their integrity.

Election integrity probably wasn't on Mark Zuckerberg's mind when he created Facebook, or even when he bought WhatsApp. But 20 and 10 years later, respectively, trust and safety is a responsibility that Meta and other tech giants can't avoid, whether they like it or not. That means working to prevent misinformation, fraud, hate speech, CSAM (child sexual abuse material), self-harm and more.

However, AI makes the job more difficult, and not just because of deepfakes or because it empowers a larger number of bad actors. In the words of Lotan Lewkowitz, a general partner at Grove Ventures:

All of these trust and safety platforms have this hash-sharing database, so I can upload something bad there, share it with all my communities, and everybody's going to stop it together. But today, I can train the model to try to avoid that. So the more classic trust and safety work is becoming tougher and tougher because of Gen AI, because the algorithm helps bypass all of these things.
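To make the mechanism concrete, here is a minimal, hypothetical sketch of how a shared hash database can flag known-bad content. The names (`KNOWN_BAD_HASHES`, `register_bad_content`, `is_known_bad`) are illustrative assumptions rather than any platform's real API, and production systems rely on perceptual hashes (such as PhotoDNA or PDQ) so that near-duplicates still match; plain SHA-256 is used here only to show the exact-match flow.

```python
# Hypothetical sketch of a shared hash database for flagged content.
import hashlib

# Assumed shared database: hashes of content already flagged elsewhere.
KNOWN_BAD_HASHES: set[str] = set()

def register_bad_content(data: bytes) -> str:
    """Hash flagged content and add it to the shared database."""
    digest = hashlib.sha256(data).hexdigest()
    KNOWN_BAD_HASHES.add(digest)
    return digest

def is_known_bad(data: bytes) -> bool:
    """Check an upload against the shared database before publishing it."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES

# One platform flags a file; every participating platform can now block it.
register_bad_content(b"example of previously flagged content")
print(is_known_bad(b"example of previously flagged content"))  # True
print(is_known_bad(b"slightly altered copy"))  # False: exact hashing misses edits
```

That last check returning False is the gap Lewkowitz points to: content that has been altered, or freshly generated by a model, won't match hashes already in the database.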

From afterthought to forefront

While online forums had already learned a thing or two about content moderation, Facebook had no social network playbook to follow when it was born, so it's somewhat understandable that it took time to grow into the task. But it is disheartening to learn from internal Meta documents that, as late as 2017, there was internal reluctance to adopt measures that would better protect children.

Zuckerberg was one of five social media CEOs who testified at a recent US Senate hearing on children's online safety. Meta wasn't the only one worth noting: the presence of Discord, which has expanded beyond its gaming roots, was a reminder that trust and safety can be at risk in many kinds of online spaces. A social gaming app, for example, could expose its users to phishing or grooming.

Do newer companies learn faster than the FAANGs did? There's no guarantee: entrepreneurs often work from first principles, which is both good and bad, and the content moderation learning curve is real. Still, OpenAI is much younger than Meta, so it's encouraging to hear that it is forming a new team to study child safety, even if that move may owe something to the scrutiny it faces.

Some startups, however, aren't waiting for signs of trouble before they act. ActiveFence, a provider of AI-enabled trust and safety solutions and part of Grove Ventures' portfolio, is seeing more inbound requests, its CEO Noam Schwartz told me.

“I've seen a lot of people come to our team from companies that have just been founded or have only just launched. They're thinking about the safety of their products during the design phase (and) adopting the concept of safety by design. They're baking safety measures into their products today, just like you think about security and privacy when you're building your features.”

ActiveFence is not the only startup in this space, which Wired describes as “trust and safety as a service.” But it is one of the largest, especially since it acquired Spectrum Labs in September, and it's nice to hear that its clients include not only big names fearing PR crises and political scrutiny, but also smaller teams just starting out. Tech, too, can learn from past mistakes.





