Exclusive — Head of AI Safety Agency Could Facilitate ‘Government-Big Tech Collusion’ Like Coronavirus Pandemic, 2020 Election

Carly Tabak via NIST.gov

A Senate Commerce Committee document exposes how the head of an AI safety agency at the Commerce Department could stifle competition, consolidate AI development in the hands of a few big tech companies, and facilitate “government-Big Tech collusion seen during the pandemic and the 2020 election,” Breitbart News has learned exclusively.

The document, which was obtained by Breitbart News, details that in response to Biden’s October 2023 executive order on AI, Commerce Secretary Gina Raimondo created the Artificial Intelligence Safety Institute (AISI) within the National Institute of Standards and Technology (NIST) to establish AI safety standards and guidance for the private sector and government agencies.

In April 2024, Raimondo announced that Paul Christiano would head AISI. NIST employees had reportedly threatened to resign over Christiano’s appointment because they feared his association with Effective Altruism and longtermism could “compromise” the institute’s objectivity and integrity.

VentureBeat reported in March 2024 that Christiano’s appointment was rushed through the hiring process, with NIST staff unaware until he had already been chosen to head the agency.

Disgraced former FTX CEO Sam Bankman-Fried was an effective altruist whose fraud and conspiracy led to the collapse of his digital currency exchange and an estimated $11 billion in lost funds.

VentureBeat described effective altruism as:

… an “intellectual project using evidence and reason to figure out how to benefit others as much as possible” — has turned into a cult-like group of highly influential and wealthy adherents (made famous by FTX founder and jailbird Sam Bankman-Fried) whose paramount concern revolves around preventing a future AI catastrophe from destroying humanity. Critics of the EA focus on this existential risk, or “x-risk,” say it is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and traditional cybersecurity.

The Senate Commerce Committee document also detailed that Christiano believes AI could result in a “10-20 percent chance of [an] AI takeover” that ends with humanity perishing, and a “50-50 chance of doom shortly after you have AI systems that are human level.”

To head off this apocalyptic vision, Christiano advocates for immediate AI regulation and for enacting AI “policies that will often lead to temporary pauses or slowdowns in practice.”

In short, the document details, Christiano is an “AI Doomer who wants more government control over computer technology.”

The document also contends that Christiano’s proposal would lead to more Big Tech-government collusion:

In order for the federal government to assess AI alignment—as Christiano proposes—it must adopt sweeping powers and tools to determine what information is “safe.” Such tools have enormous potential to facilitate the same kind of government-Big Tech censorship collusion seen during the pandemic and 2020 election. Moreover, if the government requires new AI developers seeking to enter the marketplace to pass its safety assessments—which Big Tech and its allies, like Christiano, build—the government will not only enable regulatory capture, but also promote marketplace consolidation. [Emphasis added]

In October 2014, Christiano wrote that he considers the “welfare of individual future people to be nearly as valuable as the welfare of existing people, and consequently the collective welfare of future people to be substantially more important than the welfare of existing people.”

In 2018, he wrote that he would prefer to have AI development in the hands of a few players, saying, “So I agree that to the extent that we have a coordination problem amongst developers of AI, to the extent that the field is hard to reach agreements or regulate, as there are more and more actors, then almost equally prefer not to have a bunch of new actors.”

Christiano worked on the language model alignment team at OpenAI, which aims to make its AI platform, ChatGPT, “more truthful and less toxic.” Sam Altman, the CEO of OpenAI, said in February 2023 that the development of ChatGPT could one day “break capitalism.”

Christiano left OpenAI in 2021 for Open Philanthropy-funded nonprofits that aim to provide AI companies with risk and safety assessments and to write safety standards that AISI, which Christiano now leads, “relies upon to develop federal guidelines.”

He then served as a trustee of Anthropic’s “Long-Term Benefit Trust,” which oversees the AI company, along with Jason Matheny, CEO of the RAND Corporation and a former senior Biden White House aide.

The Senate Commerce Committee document argues that as Congress considers legislation to codify AISI and “empower this new regulator to set national standards for the nascent AI industry, lawmakers should consider whether Christiano and Biden administration allies will use their newfound authority to champion alarmist policies that stifle competition, consolidate AI development into the hands of a few companies (including one Christiano personally helped to create), and further government censorship.”

Senate Commerce Committee Ranking Member Ted Cruz (R-TX) has also opened an investigation into big tech companies’ funding of Biden administration staff salaries through an obscure program known as the Intergovernmental Personnel Act.

Memo Re P. Christiano – AISI (003) by Breitbart News on Scribd

Sean Moran is a policy reporter for Breitbart News. Follow him on Twitter @SeanMoran3.
