OpenAI Forms New Team to Study Child Safety | TechCrunch


Under scrutiny from activists — and parents — OpenAI has formed a new team to study ways to prevent kids from misusing or abusing its AI tools.

In a new job listing on its careers page, OpenAI reveals the existence of a child safety team, which the company says will work with platform policy, legal and investigation groups within OpenAI, as well as outside partners, to “manage processes, incidents and reviews” relating to younger users.

The team is currently looking to hire a child safety enforcement specialist, who will be responsible for applying OpenAI's policies in the context of AI-generated content and working on review processes for “sensitive” (perhaps child-related) content.

Tech vendors of a certain size devote a fair amount of resources to complying with laws such as the US Children's Online Privacy Protection Rule (COPPA), which sets out controls on what children can access on the web and what data companies can collect on them. So OpenAI's hiring of child safety experts isn't entirely surprising, especially if the company expects to one day have a significantly younger user base. (OpenAI's current terms of use require parental consent for children ages 13 to 18 and prohibit use by children under 13.)

But the new team, formed several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on child-friendly AI guidelines and landed its first education customer, suggests a wariness on OpenAI's part of running afoul of policies on minors' use of AI, as well as of negative press.

Children and teenagers are increasingly turning to GenAI tools for help not only with schoolwork but also with personal problems. According to a poll from the Center for Democracy and Technology, 29% of children reported using ChatGPT to deal with anxiety or mental health issues, 22% for problems with friends and 16% for family disputes.

Some see this as a growing danger.

Last summer, schools and colleges banned ChatGPT over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not everyone is convinced of GenAI's potential, pointing to surveys such as one from the UK's Safer Internet Centre, which found that over half of children (53%) have seen people their age use GenAI in a negative way, for example to create believable false information or images used to upset someone.

In September, OpenAI published documentation for ChatGPT in classrooms, along with prompts and FAQs to give educators guidance on using GenAI as an instructional tool. In one of the support articles, OpenAI noted that its tools, specifically ChatGPT, “may produce output that isn't appropriate for all audiences or all ages” and advised “caution” about exposure to children, even those who meet the age requirements.

Calls for guidelines on children's use of GenAI are growing.

The UN Educational, Scientific and Cultural Organization (UNESCO) late last year urged governments to regulate the use of GenAI in education, including enforcing age restrictions on users and safeguarding data protection and user privacy. “Generative AI can be a wonderful opportunity for human development, but it can also cause harm and prejudice,” UNESCO Director-General Audrey Azoulay said in a press release. “It cannot be integrated into education without public engagement and necessary safeguards and regulations from governments.”

