A consulting firm called Gladstone AI this week published a report, commissioned by the State Department, that recommended more government involvement in the development of artificial intelligence (A.I.) to avert “urgent and growing risks to national security,” which could metastasize into an “extinction-level threat to the human species.”
The proposed remedy is to create a new government agency charged with policing A.I. and restricting its development with heavy-handed regulations.
The report, which runs about 250 pages, is entitled An Action Plan to Increase the Safety and Security of Advanced AI. It was commissioned by the State Department shortly before the release of ChatGPT, the chatbot that has become many people’s first contact with artificial intelligence technology.
The results of those encounters have been decidedly mixed, ranging from blatant disinformation and political censorship to the A.I. appearing to suffer a string of nervous breakdowns. Human users, for their part, quickly found ways to abuse ChatGPT’s powerful capabilities.
Gladstone AI did not find this a promising start to mankind’s relationship with machine intelligence. The report’s authors were particularly concerned about the next step in the technology’s evolution: artificial general intelligence (AGI), a “transformative technology with profound implications for democratic governance and global security.”
AGI refers to advanced A.I. systems that can “outperform humans across all economic and strategically relevant domains, such as producing practical long-term plans that are likely to work under real world conditions.”
The nightmare scenario is “loss of control,” which the report defined as “a potential failure mode under which a future A.I. system could become so capable that it escapes all human effort to contain its impact.”
The consequences of loss of control could escalate into the Information Age equivalent of a weapon of mass destruction, a catastrophe “up to, and including, events that would lead to human extinction.”
The report’s authors borrowed this concept from Sam Altman, CEO of OpenAI, the creator of ChatGPT. Altman was one of more than 300 signatories to a public “statement on A.I. risk” in May 2023 that said “mitigating the risk of extinction from A.I. should be a global priority,” on par with mitigating the risks from pandemics and nuclear war.
Altman felt it was impossible to halt, or even meaningfully pause, A.I. research, “because if people in America stop, people in China wouldn’t.” He urged the development of precautionary standards that could be adopted by researchers worldwide, which is essentially the same recommendation made by the Gladstone AI report.
The report submitted to the State Department recommended the creation of an entirely new U.S. federal agency to control A.I. research, implementing draconian regulations such as caps on how much computing power any given A.I. system can use. The cap would be set fairly close to the maximum power of today’s computer systems, essentially foreclosing further technological development.
The new federal agency would also keep a tight lid on A.I. code, criminalizing its distribution to anyone beyond the company that created it and, of course, the new government computer cops.
The authors called for such intrusive government remedies because they feared the competitive race to develop AGI would make cutting-edge “frontier” companies reckless.
“Frontier A.I. labs face an intense and immediate incentive to scale their AI systems as fast as they can. They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern,” the report said.
The tight controls on research were suggested because the authors worried that A.I. software might keep improving until it transcends the limits of today’s processors. One role of the proposed federal A.I. agency would be pumping the brakes on software development to ensure the AGI genie doesn’t suddenly pop out of seemingly obsolete chipsets a few years from now.
These ideas would seem to run afoul of Altman’s warning about what China will do, even if America stops doing it. The Gladstone AI team was well aware of this, saying in pre-publication interviews that nothing would prevent artificial intelligence researchers from simply departing the United States to continue their work in less restrictive jurisdictions.