Artificial intelligence firms OpenAI and Anthropic recently agreed to give the US AI Safety Institute early access to any “major” new AI models they develop.
While the deal was purportedly signed over mutual safety concerns, it’s unclear exactly what the government’s role would be in the event of a technology breakthrough with significant safety implications.
OpenAI CEO and cofounder Sam Altman described the deal as a necessary step in a recent post on the X social media platform:
“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”
Frontier AI models
OpenAI and Anthropic are both working towards the same goal: the development of an artificial general intelligence (AGI). There’s no scientific consensus as to exactly how AGI should be defined, but the basic idea is an AI capable of doing anything a human could do given the necessary resources.
Both companies have separate charters and missions stating that each aims to develop AGI safely, with humanity’s interests at the core. If one or both companies were to succeed, they would become the de facto gatekeepers of technology capable of imbuing machines with human-level intelligence.
By inking a deal with the US government to disclose their models prior to any product launch, both companies have effectively punted that responsibility to the federal government.
Area 51
As Cointelegraph recently reported, OpenAI may be on the verge of a breakthrough. The company’s “Strawberry” and “Orion” projects are purported to feature advanced reasoning capabilities and new techniques for mitigating AI’s hallucination problem.
According to a report from The Information, the US government has already seen these tools, along with early internal iterations of ChatGPT in which they were implemented.
A blog post from the US National Institute of Standards and Technology indicates that this deal and the safety guidelines surrounding it are all voluntary for the companies participating.
The benefit of this form of light-touch regulation, according to the companies and analysts supporting it, is that it spurs growth and allows the sector to regulate itself. Proponents, such as Altman, cite this agreement as an example of how such cooperation can benefit both the government and the corporate world.
However, one of the potentially concerning side effects of light-touch regulation is a lack of transparency. If OpenAI or Anthropic succeed in their mission, and the government decides the public doesn’t have a need to know, there appears to be no legal mechanism requiring disclosure.
Related: Anthropic CEO says future of AI is a hive-mind with a corporate structure