Ofcom report finds 1 in 5 harmful content search results were 'one-click gateways' to more toxicity | TechCrunch

Move over, TikTok. Ofcom, the UK regulator now enforcing the country's Online Safety Act, is gearing up to take on an even bigger target: search engines like Google and Bing, and the role they play in surfacing self-harm, suicide and other harmful content at the click of a button, particularly for younger users.

A report commissioned by Ofcom and produced by the Network Contagion Research Institute found that major search engines, including Google, Microsoft's Bing, DuckDuckGo, Yahoo and AOL, have become "one-click gateways" to such content by facilitating easy, quick access to harmful web pages, images and videos, with one in five search results for basic self-injury terms linking to further harmful content.

The research is timely and important because much of the recent focus on harmful content online has been around the influence and use of walled-garden social media sites such as Instagram and TikTok. With open-ended sites such as Google.com attracting more than 80 billion visits per month, compared with TikTok's roughly 1.7 billion monthly active users, the new research helps Ofcom understand and gather evidence about whether there is a much larger potential threat.

“Search engines are often the starting point for people's online experience, and we are concerned they can act as one-click gateways to seriously harmful self-harm content,” said Almudena Lara, director of online safety policy development at Ofcom, in a statement. “Search services need to understand their potential risks and the effectiveness of their safeguards – particularly for keeping children safe online – ahead of our wider consultation due in the spring.”

Ofcom said researchers analyzed some 37,000 result links across those five search engines for the report. Using both common and more cryptic search terms (cryptic to try to evade basic screening), they intentionally ran searches with “safe search” parental screening tools turned off, to mimic the most basic ways people might engage with search engines as well as worst-case scenarios.

As you might imagine, the results are in many ways alarming and damning.

Not only did 22% of the search results produce single-click links to harmful content (including instructions for various forms of self-harm), but that content accounted for 19% of the top-most links in the results (and 22% of the links down the first pages of results).

Image searches were particularly egregious, the researchers found, with a full 50% of them returning harmful content, followed by web pages at 28% and video at 22%. One reason some of these are not screened out better by search engines is that algorithms tend to confuse self-harm imagery with medical and other legitimate media, the report concluded.

Cryptic search terms were also better at evading screening algorithms: they were six times more likely to lead a user to harmful content.

One thing not touched on in the report, but likely to become a bigger issue over time, is the role that generative AI searches will play in this space. So far, more safeguards appear to be in place to prevent platforms like ChatGPT from being misused for toxic purposes; the question is whether users will figure out how to game them, and what that might lead to.

“We are already working to create a deeper understanding of the opportunities and risks of new and emerging technologies so that innovation can flourish while consumer safety is protected. Some applications of generative AI are likely to fall within the scope of the Online Safety Act and we expect services to assess the risks associated with its use when carrying out their risk assessment,” an Ofcom spokesperson told TechCrunch.

It's not all a nightmare, though: some 22% of search results were also flagged as helpful in a positive way.

The report may be used by Ofcom to get a better idea of the problem at hand, but it is also an early signal to search engine providers that they will need to prepare to act. Ofcom has already made it clear that children will be its first focus in enforcing the Online Safety Act. In the spring, Ofcom plans to open a consultation on its Protection of Children Codes of Practice, which aim to set out “the practical steps search services can take to adequately protect children”.

This includes taking steps to reduce children's exposure to harmful content about sensitive topics such as suicide or eating disorders across the Internet, including through search engines.

“Tech firms that don't take this seriously can expect Ofcom to take appropriate action against them in the future,” an Ofcom spokesperson said. That includes fines (which Ofcom says it will use only as a last resort) and, in the worst cases, court orders requiring ISPs to block access to non-compliant services. Executives who oversee services that violate the rules could also face criminal liability.

For its part, Google has taken issue with some of the report's findings and how it characterizes the company's efforts, claiming that its parental controls do a lot of the important work that invalidates some of these findings.

“We are fully committed to keeping people safe online,” a spokesperson said in a statement to TechCrunch. “Ofcom's study does not reflect the safeguards that we have in place on Google Search and references terms that are rarely used on Search. Our SafeSearch feature, which filters harmful and shocking search results, is on by default for users under 18, while the SafeSearch blur setting – a feature which blurs explicit imagery, such as self-harm content – is on by default for all accounts. We also work closely with expert organisations and charities to ensure that when people come to Google Search for information about suicide, self-harm or eating disorders, crisis support resource panels appear at the top of the page.”

Microsoft and DuckDuckGo did not immediately respond to a request for comment.


