Former OpenAI Board Member Reveals Why the Company (Temporarily) Fired CEO Sam Altman

OpenAI chief Sam Altman looking lost
Mike Coppola/Getty

Helen Toner, a former OpenAI board member who helped remove CEO Sam Altman in November before he was promptly rehired, recently shared insights into the events that led to Altman’s firing.

CNBC reports that in a recent episode of The TED AI Show podcast, Toner broke her silence regarding the circumstances that led to Altman’s ousting in November. Toner revealed that Altman had been withholding information, misrepresenting company happenings, and even lying to the board, making it difficult for the directors to fulfill their role of ensuring the company’s public good mission remained a priority.

OpenAI logo seen on screen with ChatGPT website displayed on mobile seen in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

OpenAI boss Sam Altman (Kevin Dietsch/Getty)

One striking example Toner provided was that the board was not informed in advance about the release of ChatGPT in November 2022 and instead found out about it on Twitter. Additionally, Altman failed to disclose to the board that he owned the OpenAI startup fund.

“The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary, was coming first — over profits, investor interests, and other things,” Toner explained. However, she stated that Altman’s actions over the years had made it nearly impossible for the board to carry out their duties effectively.

Toner also mentioned that Altman had provided inaccurate information about the company’s formal safety processes on multiple occasions. While Altman always had seemingly innocuous explanations for each incident, the cumulative effect led the four board members who fired him to conclude that they could no longer trust what Altman was telling them. “The end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us, and that’s just a completely unworkable place to be in as a board — especially a board that is supposed to be providing independent oversight over the company, not just helping the CEO to raise more money,” Toner said.

In October, a month before Altman’s removal, the board had conversations with two executives who shared their experiences with Altman, including screenshots and documentation of problematic interactions and mistruths. The executives described a toxic atmosphere and expressed their belief that Altman was not the right person to lead the company to artificial general intelligence (AGI). “The two of them suddenly started telling us … how they couldn’t trust him, about the toxic atmosphere he was creating,” Toner said. “They used the phrase ‘psychological abuse,’ telling us they didn’t think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change.”

The decision to remove Altman prompted resignations and threats of resignation, including an open letter in which nearly all OpenAI employees threatened to quit, as well as backlash from investors like Microsoft. Within a week, Altman was reinstated, and board members Toner and Tasha McCauley, who had voted to oust him, were removed from their positions.

The Wall Street Journal and other media outlets reported that while OpenAI co-founder Ilya Sutskever focused on ensuring that artificial intelligence would not harm humans, others, including Altman, were more eager to push ahead with delivering new technology.

In March, OpenAI announced its new board, which includes Altman, and the conclusion of an internal investigation by law firm WilmerHale into the events leading up to Altman’s ouster. OpenAI did not publish the WilmerHale investigation report but summarized its findings. “The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg,” OpenAI board chair Bret Taylor said at the time, referring to president and co-founder Greg Brockman. The review also “concluded the board acted in good faith … [and] did not anticipate some of the instability that led afterwards,” Taylor added.

Read more at CNBC here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
