Teens on social media need both protection and privacy – AI can help get the balance right



Meta announced on January 9, 2024, that it will protect teenage users on Instagram and Facebook by preventing them from viewing content it deems harmful, including content related to suicide and eating disorders. The move comes as federal and state governments step up pressure on social media companies to provide safety measures for teenagers.

At the same time, teens turn to their peers on social media for support they can't get elsewhere. Efforts to protect teenagers can inadvertently make it harder for them to get that help.

Congress has held several hearings in recent years about social media and its potential harm to youth. The CEOs of Meta, X (formerly known as Twitter), TikTok, Snap and Discord are scheduled to testify before the Senate Judiciary Committee on January 31, 2024, about their efforts to protect minors from sexual exploitation.

According to a statement ahead of the hearing from the committee's chair and ranking member, Senators Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), respectively, tech companies will "finally be forced to acknowledge their failures to protect children."

I am a researcher who studies online safety. My colleagues and I study teenage social media interactions and the effectiveness of platforms' efforts to protect users. Research shows that even though teens face risks on social media, they also find peer support there, especially through direct messaging. We have identified a set of steps that social media platforms can take to protect users while also protecting their privacy and autonomy online.

What children are facing

The prevalence of dangers for teenagers on social media is well established. These risks range from harassment and bullying to poor mental health and sexual exploitation. Companies like Meta have found that their platforms exacerbate mental health issues, helping to make youth mental health one of the priorities of the US Surgeon General.

Much of the research on adolescent online safety is based on self-reported data such as surveys. More research is needed on young people's real-world private interactions and their perspectives on online risks. To address this need, my colleagues and I collected a large dataset of young people's Instagram activity, including more than 7 million direct messages. We asked young people to annotate their own conversations and identify messages that made them feel uncomfortable or unsafe.

Using this dataset, we found that direct messaging is critical for young people seeking support on issues ranging from daily life to mental health concerns. Our findings suggest that young people use these private channels to discuss their public interactions in greater depth. In settings built on mutual trust, teens feel safe asking for help.

Research suggests that privacy in online speech plays an important role in young people's online safety, yet a significant share of harmful interactions on these platforms comes in the form of private messages. Unsafe messages flagged by users in our dataset included harassment, sexual messages, sexual solicitations, nudity, obscenity, hate speech, and the sale or promotion of illegal activities.

However, using automated technology to detect and prevent online risks for teens has become more difficult as platforms face pressure to protect user privacy. For example, Meta has implemented end-to-end encryption for all messages on its platforms to ensure that message content is secure and accessible only to the participants in a conversation.

Meta has also taken steps to block suicide and eating disorder content from teens' public feeds and search results, even when it is shared by a friend. But this means the young person who shared that content is left without the support of their friends and peers. In addition, Meta's content strategy does not address the unsafe interactions that take place in young people's private conversations.

Striking the balance

The challenge, however, is to protect young users without invading their privacy. To that end, we conducted a study to find out how little data is needed to detect unsafe messages. We wanted to understand how various features, or metadata, of risky conversations, such as the length of the conversation, the average response time and the relationships of the participants, can contribute to machine learning programs that detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances. A simple sketch of this metadata-only approach follows.
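The short Python sketch below is only a rough illustration of the idea, not the system from our study: the feature names, toy training examples and model choice are invented for the purpose of showing how conversation metadata alone, with no message text, could feed a classifier.

```python
# Minimal sketch: classify conversations as unsafe using metadata only.
# All feature names, example values and labels here are hypothetical.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class ConversationMetadata:
    message_count: int            # length of the conversation
    avg_response_seconds: float   # average response time
    participants_connected: bool  # do the participants follow each other?
    one_sided_ratio: float        # share of messages sent by one party

def to_features(meta: ConversationMetadata) -> list[float]:
    """Turn conversation metadata into a numeric feature vector."""
    return [
        float(meta.message_count),
        meta.avg_response_seconds,
        float(meta.participants_connected),
        meta.one_sided_ratio,
    ]

# Hypothetical labeled examples: short, one-sided exchanges between
# unconnected users (label 1 = unsafe) vs. longer mutual conversations (0).
train = [
    (ConversationMetadata(4, 20.0, False, 0.95), 1),
    (ConversationMetadata(3, 15.0, False, 1.00), 1),
    (ConversationMetadata(120, 300.0, True, 0.55), 0),
    (ConversationMetadata(80, 600.0, True, 0.48), 0),
]
X = [to_features(m) for m, _ in train]
y = [label for _, label in train]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Score a new conversation without ever reading its text, images or video.
new_convo = ConversationMetadata(5, 12.0, False, 0.9)
print(model.predict_proba([to_features(new_convo)]))
```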

We found that our machine learning program could detect unsafe conversations 87% of the time using only the metadata of the conversations. However, analyzing the text, images and videos of a conversation remains the most effective way to identify the type and severity of the risk.

These results highlight the importance of metadata for distinguishing unsafe conversations and could serve as a guideline for platforms designing artificial intelligence risk detection. Platforms could use high-level features such as metadata to block harmful content without scanning that content and thereby violating users' privacy. For example, a persistent bully whom a teenager wants to avoid would produce metadata (repetitive, brief, one-sided communications between unconnected users) that an AI system could use to block the bully.
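As a concrete, if simplified, illustration of that last example, the sketch below encodes the persistent-bully pattern as a metadata rule. The thresholds are made up for illustration; a real system would learn them from data rather than hard-code them.

```python
# Sketch of a metadata-only rule for the persistent-bully pattern described
# above: repetitive, brief, one-sided messages between unconnected users.
# The thresholds are invented for illustration, not taken from our study.
def should_block(message_count: int,
                 avg_message_length: float,
                 one_sided_ratio: float,
                 participants_connected: bool) -> bool:
    """Flag a conversation whose metadata matches a persistent-bully pattern."""
    return (
        not participants_connected      # sender and recipient don't follow each other
        and message_count >= 10         # repeated contact
        and avg_message_length < 40     # brief messages
        and one_sided_ratio > 0.9       # almost entirely one-way
    )

# Example: 15 short, one-way messages from an unconnected account -> block.
print(should_block(message_count=15, avg_message_length=22.0,
                   one_sided_ratio=0.97, participants_connected=False))
```

Because such a rule never inspects message content, it could in principle run even on end-to-end encrypted conversations, where only metadata is available to the platform.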

Ideally, young people and their caregivers would be given the option, by design, to turn on encryption, risk detection or both, so that they can decide the trade-offs between privacy and safety for themselves.


