UK AI Body Renamed, Focuses on Security - Anthropic Partnership

February 14, 2025
U.K. Shifts Focus of AI Institute to Security

The United Kingdom government is strategically reorienting its efforts to stimulate economic growth and industrial advancement through AI. As part of this initiative, a key institution established just over a year ago is undergoing a significant transformation in purpose.

The Department for Science, Innovation and Technology has announced the renaming of the AI Safety Institute to the “AI Security Institute.” (Because the initials are unchanged, the institute’s URL stays the same.)

From Safety to Security

The change marks a shift in focus away from areas such as existential risk and bias in large language models, and toward cybersecurity.

Specifically, the institute will concentrate on strengthening protections against the risks AI poses to national security and against its use in criminal activity.

New Partnership with Anthropic

Concurrently, the government revealed a new collaboration with Anthropic.

While specific services haven’t been detailed, a Memorandum of Understanding (MOU) outlines plans to explore the integration of Anthropic’s AI assistant, Claude, into public services.

Anthropic will also contribute to scientific research and economic modeling efforts.

Furthermore, the company will provide tools for evaluating AI capabilities, with a focus on identifying security vulnerabilities.

Anthropic’s Vision

“AI holds the potential to revolutionize how governments serve their citizens,” stated Dario Amodei, co-founder and CEO of Anthropic.

“We anticipate exploring how Anthropic’s AI assistant Claude can assist UK government agencies in enhancing public services, aiming to discover innovative methods for making essential information and services more efficient and accessible to UK residents.”

Expanding Collaboration

The Anthropic deal is the first partnership to be announced during a week of AI-focused events in Munich and Paris.

However, it is not the sole entity collaborating with the government.

A series of tools unveiled in January were all powered by OpenAI.

Peter Kyle, the secretary of state for science, innovation and technology, has previously indicated the government’s intention to work with a diverse range of foundational AI companies, a commitment the Anthropic agreement demonstrates.

A Predictable Evolution

The government’s rebranding of the AI Safety Institute – launched with considerable attention just over a year ago – to AI Security should not be entirely unexpected.

The Labour government’s AI-centric Plan for Change, unveiled in January, notably omitted the terms “safety,” “harm,” “existential,” and “threat.”

This omission was deliberate.

Prioritizing Economic Growth

The government’s strategy centers on stimulating investment in a modernized economy, leveraging technology, and particularly AI, to achieve this goal.

It seeks closer collaboration with Big Tech and aims to foster the development of domestic technology giants.

Consequently, the primary messages being promoted emphasize development, AI, and further development.

Civil servants will have access to an AI assistant named “Humphrey,” and are encouraged to share data and utilize AI to streamline their workflows.

Citizens will benefit from digital wallets for their government documents and AI-powered chatbots.

Progress Over Caution?

Does this shift mean AI safety concerns have been resolved?

Not exactly, but the prevailing sentiment is that such concerns cannot be allowed to stand in the way of progress.

Maintaining Core Principles

The government asserts that despite the name change, the institute’s fundamental mission will remain consistent.

“The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle explained.

“The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

Continued Focus on Security

Ian Hogarth, the institute’s chair, added, “The Institute’s focus from the start has been on security and we’ve built a team of scientists focused on evaluating serious risks to the public.”

“Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

Global Shifts in AI Priorities

Looking beyond the U.K., priorities surrounding “AI Safety” appear to be evolving.

The primary concern for the AI Safety Institute in the U.S. is currently the possibility of its dissolution.

U.S. Vice President J.D. Vance hinted at this during a speech in Paris earlier this week.
