What OpenAI’s momentary meltdown tells us about the AI industry

Photo Courtesy: Jernej Furman (via Wikimedia Commons)

Just over two weeks ago, OpenAI - Silicon Valley's golden start-up - underwent a weekend-long meltdown. The aftermath has raised concerns about the path artificial intelligence development seems to be taking. Ultimately, it illustrated how the scarcity of AI talent gives these engineers enormous leverage and, with it, control over the pace of AI development. In the meantime, we (the rest of the population) are expected to sit back, adapt, and await the consequences.

A chaotic weekend

So what exactly happened Friday through Monday? Almost a year after the launch of ChatGPT, OpenAI's CEO Sam Altman was ousted by the organisation's board. The announcement was made in an official statement on November 17, which justified his "departure" on the grounds that "he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

Despite the absence of a clear explanation, we know that board members Adam D'Angelo, Ilya Sutskever, Tasha McCauley, and Helen Toner were the decision-makers, and that all of them had expressed concern about the disregard of AI safety. When he joined the board in 2018, D'Angelo stated that AI safety was "both important and under-appreciated", while Toner put herself on the radar by co-authoring an academic paper criticising OpenAI's approach to safety.

The events that followed Altman's exit are a blur of madness for many. As soon as the news broke, co-founder Greg Brockman quit in solidarity, and CTO Mira Murati was appointed interim CEO. On Sunday night, she was replaced by former Twitch CEO Emmett Shear, who became the third CEO in three days.

On Monday, Microsoft, the company's biggest investor, announced it was hiring both Altman and Brockman to lead its own advanced research lab, sending shockwaves through the industry. Over 730 of OpenAI's employees then signed a letter threatening to leave and join their former CEO on the project if the board did not resign and reinstate both men.

This gave Altman decisive leverage and, coupled with his considerable influence over Silicon Valley's investors and his co-workers, secured his return as OpenAI's chief executive. He rejoined the company with Brockman by his side, together with an entirely new board that includes a nonvoting seat for Microsoft.

Despite this return to order, there is the lingering feeling that something remains unresolved, the events having left a sense of uneasiness. A few days later, Reuters revealed that, prior to all of this, the board had received a message from several staff researchers warning of a powerful AI discovery connected to an internal project referred to as Q*. In essence, the project represents a first step towards models that can reason on their own - a serious leap in AI history, and potentially a factor in Sam Altman's sacking.

A manifestation of ideological differences

Either way, the unfolding of events illustrates a deep ideological divide within OpenAI. The company's governance crisis seems to have been the result of ongoing contention between two camps: those who believe AI poses serious threats to humanity, and those who downplay those threats for the sake of technological progress.

On the one hand, the group that advocates for regulation is largely influenced by a philosophical movement called effective altruism. Its adherents believe that unregulated AI is a serious cause for concern, one that could threaten humanity. Somewhat unsurprisingly, three of the aforementioned board members (Toner, McCauley, and D'Angelo) have deep ties to effective altruism.

It is important to remember that the board's responsibility was to ensure OpenAI was creating artificial intelligence that "benefits all of humanity". Essentially, its members were tasked with making sure the non-profit side of the company was prioritised, and that its models progressed at a pace society could keep up with. Setting aside the absurdity of entrusting this task to six Big Tech workers, including the CEO himself, at least the previous board was composed of individuals driven by serious ethical concerns.

On the other hand is the group that seems to have won the recent feud: endorsers of "effective accelerationism", who push for the further development of AI's capabilities. It is precisely this drive for rapid development that has been guiding Altman for over a year. As competition grew, he advocated for the expedited launch of ChatGPT as a way of gaining early knowledge of how users interact with AI. Since then, a multitude of new products and the push towards commercialisation have been top priorities.

Indeed, the company was founded as a non-profit organisation whose initial goal was to "benefit humanity as a whole, unconstrained by a need to generate financial return". Since 2019, however, it has operated under a capped-profit model, which has clearly taken precedence. Former OpenAI employees told The Atlantic that "After ChatGPT, there was a clear path to revenue and profit. You could no longer make a case for being an idealistic research lab. There were customers looking to be served here and now."

Hence the importance of having a board focused on regulation and the prioritisation of humanity over business and fast-paced progress. Instead, OpenAI's recent meltdown will have pushed the company to become more business-minded, while cementing loyalty and commitment to Altman.

Power holders

The future of such models now undoubtedly lies in the hands of individuals such as Altman and OpenAI's programmers. The scarcity of skilled AI workers has made them incredibly valuable in an industry expanding faster than its talent pool, with the result that they have more power than ever. As OpenAI employees threatened to quit, Microsoft and other tech firms jumped at the opportunity to take them in, leaving the board utterly powerless. At least for the time being, this is an industry in which workers are not interchangeable and leaders of such projects cannot be replaced - a prospect that is particularly disturbing given that their profession consists of creating a tool with the potential to erode so many other professions.

As much as artificial intelligence may become an incredibly valuable tool that everyone can use to their advantage, it is strange to think that such a small group of people are developing a world-altering technology. Firstly, because this development is now happening behind closed doors. And secondly, because it is being driven in part by a profit-maximising goal. Can we rely on these corporate structures to lead the advancement of, and investment in, AI and do what's best for humanity?