The US will examine the national security implications of new AI models from Google’s DeepMind, Microsoft and xAI before they are released to the public, as officials seek greater oversight of the powerful technology.
The Center for AI Standards and Innovation (Caisi), which sits under the commerce department, on Tuesday said it had signed a deal with the tech groups to “conduct pre-deployment evaluations and targeted research”.
It said the reviews would enable the government “to better assess frontier AI capabilities and advance the state of AI security”.
The move comes as the White House considers further measures to assess advanced models before they are widely released, said people familiar with the matter.
Advisers to US President Donald Trump have mulled an executive order to impose these assessments, although discussions are at an early stage.
Senior US officials have been spooked by early versions of Anthropic’s new Mythos model, which the company has said has a much greater ability to identify and exploit cyber security vulnerabilities.
Anthropic’s chief executive Dario Amodei met White House chief of staff Susie Wiles last month, in a sign of a détente between the AI lab and the White House.
The start-up had been labelled a national security threat for refusing to allow the Pentagon unrestricted use of its technology. Anthropic is suing the administration over the designation.
Trump later struck a conciliatory tone, telling CNBC in an interview that Anthropic was “shaping up” and “I think we will get along with them just fine”.
Tuesday’s agreement is similar to one signed with Anthropic and OpenAI during Joe Biden’s administration two years ago, when Caisi was known as the US Artificial Intelligence Safety Institute. Under that deal, which mirrored the UK’s policy, the government could gain access to models before their public release in order to assess and mitigate safety risks.
The earlier agreements have enabled more than 40 such evaluations to date, Caisi said. Researchers at the agency are routinely provided with access to new models with safeguards removed or reduced, so they can assess the tech’s capabilities and risks.
The evaluations focus on AI capabilities that may pose risks to national security, with particular emphasis on cyber security, biosecurity and chemical weapons. The agency also leads assessments of AI systems developed in China, and co-ordinates findings with the Pentagon, the White House and intelligence agencies.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” agency director Chris Fall said. “These expanded industry collaborations help us scale our work in the public interest at a critical moment.”
Last month, tech industry representatives and AI safety campaigners called on Congress to appropriate more funding to Caisi, to help “address the complex challenges presented by AI systems”.