UK Government Announces $100M+ Plan to Fund 'Responsible' AI R&D | TechCrunch


The UK government is finally publishing its response to the AI regulation consultation it launched last March, with a white paper emphasizing reliance on existing laws and regulators, combined with "context-specific" guidance, to lightly supervise the disruptive high-tech sector.

The full response is not being released until later this morning, so it was not available for review at the time of writing. But in a press release ahead of publication, the Department for Science, Innovation and Technology (DSIT) is spinning the plan as boosting UK "global leadership" through targeted measures – including £100 million+ (~$125 million) in additional funding – intended both to step up oversight of AI and to drive innovation.

According to DSIT's press release, £10 million (~$12.5 million) of the extra funding will go to regulators to "upskill" for their expanded workload – that is, working out how to apply existing sectoral rules to AI developments, and how to actually enforce existing laws on AI apps that violate the rules (presumably by developing their own technical tools).

"The fund will help regulators develop cutting-edge research and practical tools to monitor and address risks and opportunities in their sectors, from telecoms and healthcare to finance and education. For example, this might include new technical tools for examining AI systems," DSIT wrote. It did not provide details on how many additional staff might be hired with the extra funding.

The government says the other £90 million (~$113 million) of the funding – much the bigger slice – will be used to establish nine research hubs around the UK to promote homegrown AI innovation in areas such as healthcare, maths and chemistry.

The 90:10 split in the funding signals where the government wants most of the action to happen – with the bucket earmarked for homegrown AI development the clear winner, while "targeted" enforcement against associated AI safety risks looks like a comparatively minor, part-time add-on operation for regulators. (Although the government has also previously announced £100 million for an AI taskforce, focused on safety R&D around advanced AI models.)

DSIT confirmed to TechCrunch that the £10 million fund to expand regulators' AI capabilities has yet to be established – saying the government is "moving fast" to set up the mechanism. "However, it is vital that we get it right in order to achieve our targets and ensure we are getting value for taxpayers' money," a department spokesperson told us.

The £90 million in funding for the nine AI research hubs will cover five years, starting February 1. "The funding has already been awarded, with investments in the nine hubs ranging from £7.2 million to £10 million," the spokesperson said. They did not provide details on the focus of the other six research hubs.

The top-line news today, though, is the government sticking to its plan not to introduce any new legislation for artificial intelligence yet.

"The UK government will not rush to legislate, or risk implementing 'quick-fix' rules that would soon become outdated or ineffective," wrote DSIT. "Instead, the government's context-based approach means empowering existing regulators to address AI risks in a targeted way."

It's not surprising it's staying this course – the government is facing an election this year that polls suggest it will almost certainly lose, so the administration is simply running out of road to write any laws. Certainly, time is running out in the current parliament. (And, in any case, passing legislation on a complex technological topic like AI is clearly not in the current prime minister's gift, given the political calendar.)

At the same time, the European Union has reached agreement on the final text of its own risk-based framework for regulating "trustworthy" AI, which looks set to start applying there from later this year. So the UK's strategy of opting out of legislating on AI – choosing instead to tread water on the issue – has the effect of widening the gap with the neighboring bloc, which, taking the opposite approach, is now moving forward (and further away from the UK's position) by implementing its AI law.

The UK government apparently intends this strategy to act as a big welcome mat for AI developers. But disruptive high-tech businesses also tend to prize legal certainty – and the EU is unveiling its own package of AI support measures – so which mix of policies, sector-specific guidance or a prescribed set of legal risks, will attract the most growth-charging AI "innovation" remains to be seen.

“The UK's proactive regulatory system simultaneously enables regulators to respond rapidly to emerging risks, while giving developers room to innovate and grow in the UK,” is DSIT's boosterish line.

(On business confidence, specifically, its release flags that "key regulators", including Ofcom and the Competition and Markets Authority (CMA), have been asked to publish their approach to managing AI by April 30 – which it says will "describe their current skills and expertise and a plan for how they will regulate AI in the coming year" – so AI developers operating under UK regulations are advised to prepare to read regulatory tea leaves, plural, to work out their own risk of landing in legal hot water across multiple sectoral AI enforcement priority plans.)

One thing is clear: UK Prime Minister Rishi Sunak remains very comfortable around tech bros – whether he's taking time off from the day job to conduct an interview with Elon Musk for streaming on the latter's own social media platform; finding time in his packed schedule to meet CEOs of US AI giants and hear out their 'existential risk' lobbying agenda; or convening a "global AI safety summit" to gather technologists at Bletchley Park – so his decision to plump for a policy option that doesn't come with any tough new rules attached right now is undoubtedly the obvious choice for him and his government.

On the other hand, the Sunak government does seem to be in a hurry on another front: disbursing taxpayer funds to charge up homegrown "AI innovation". And the hint here from DSIT is that these funds will be strategically targeted to ensure that rapid high-tech developments are "responsible" (the question being what "responsible" means without a legal framework to define the contextual boundaries).

As well as the aforementioned £90 million for the nine research hubs fronting DSIT's PR, there is an announcement of £2 million in Arts & Humanities Research Council (AHRC) funding to support new research projects that it says "will help to define what responsible AI looks like across sectors such as education, policing and the creative industries". These are part of the AHRC's ongoing Bridging Responsible AI Divides (BRAID) programme.

In addition, £19 million will go to 21 projects developing "innovative reliable and responsible AI and machine learning solutions" aimed at accelerating the deployment of AI technologies and increasing productivity. ("This is funded by the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI (UK Research & Innovation) Technology Missions Fund, and delivered by the Innovate UK BridgeAI programme," says DSIT.)

In a statement accompanying today's announcements, Secretary of State for Science, Innovation and Technology Michelle Donelan added:

The UK's innovative approach to AI regulation has made us a world leader in both AI safety and AI development.

I am personally driving AI's potential to transform our public services and economy for the better – leading to new treatments for devastating diseases such as cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.

AI is moving fast, but we've shown that humans can move just as fast. By taking a proactive, sector-specific approach, we can start to catch risks immediately, paving the way for the UK to become one of the first countries in the world to safely reap the benefits of AI.

Today's £100 million+ (in total) of funding announcements comes on top of the £100 million previously announced by the government for the aforementioned AI safety taskforce (which became the AI Safety Institute), focused on so-called frontier (or foundational) AI models – DSIT confirmed this when we asked.

We also asked about the criteria and processes for awarding UK taxpayer funding to AI projects. We have heard concerns that the Government's approach sidesteps the need for a thorough peer review process – as proposals are not rigorously vetted in the rush to secure funding allocations.

A DSIT spokesperson denied there had been any change to normal UKRI processes in response. "UKRI funds research on a competitive basis," they pointed out. "Individual applications for research are assessed by independent experts from academia and business. Each proposal for research funding is assessed by experts for excellence and, where applicable, impact."

"DSIT is working with regulators to finalize the specifics [of project oversight], but this will focus on projects that support regulators to implement our AI regulatory framework, ensuring we are harnessing the transformative opportunities this technology offers while minimizing the harms it causes," said the spokesperson.

On foundational model safety, DSIT's PR indicates the AI Safety Institute will "see the UK working with international partners to boost our ability to evaluate and research AI models". And the government also announced a further £9 million investment, via the International Science Partnership Fund, which will be used to bring together researchers and innovators in the UK and the US – "to focus on developing safe, responsible and reliable AI".

The department's press release describes the government's response as "making a pro-innovation case for more targeted binding requirements on the small number of organizations currently developing highly capable general-purpose AI systems, to ensure they are held accountable for making these technologies sufficiently safe".

"This builds on the steps already being taken by the UK's expert regulators to respond to AI risks and opportunities in their domains," it adds. (Last fall, for instance, the CMA laid out a set of principles that it says will guide its approach to generative AI.) The PR also talks up "partnering with the US on responsible AI."

Asked for more details on this, the spokesperson said the aim of the partnership is to "bring together researchers and innovators in a bilateral research partnership with the US focused on developing safe, responsible and trustworthy AI, as well as AI for scientific uses". They added that the department expects "international teams to explore new methodologies for responsible AI development and use."

“Enhancing common understanding of technology development between countries will improve inputs for international governance of AI and help generate research inputs for domestic policy makers and regulators,” said a DSIT spokesperson.

While they confirmed there would be no US-style 'AI safety and security' executive order issued by the Sunak government, the response to the AI Regulation White Paper consultation, due to drop later today, sets out "next steps".
