Sam Altman was fired from OpenAI on a Friday. Sam Altman got a job at Microsoft on Sunday. Sam Altman got back his old job at OpenAI the next Tuesday.

Between Friday and Tuesday, OpenAI had two CEOs. The bizarre developments left many wondering whether they could trust the platform the company is developing. The reason OpenAI gave for Altman's dismissal was that a deliberative review process by the board had concluded that he was not consistently candid in his communications with it, hindering its ability to exercise its responsibilities.

As a result, the board stated that it no longer had confidence in his ability to continue leading OpenAI. The saga appears to have stemmed from differences of opinion over whether OpenAI should be a non-profit organisation or a commercial enterprise.

OpenAI started in 2015 as a non-profit with the objective of researching AI, figuring out how to make it safe, and putting guardrails in place to prevent its irresponsible development.

Elon Musk was an early investor. Elon Musk being Elon Musk, he walked away from the company after a couple of years, apparently over differences about the direction it was taking. Altman realised that the future lay in large language models, the technology behind ChatGPT. This technology was huge: it required enormous computing power and was highly capital intensive. In short, it needed regular infusions of substantial funding.

Model tweak

This prompted Altman to tweak OpenAI's business model to include a for-profit engine that could earn revenues and, in turn, ensure that the funding window never dried up. The plug-ins that accompanied ChatGPT 4 were the funding engines. The rift between Altman and the board came to a boiling point when he proposed a developer day to encourage the development of a larger number of plug-ins.

Interestingly, neither Altman nor any of the others had investments in OpenAI. We are not aware whether there was any lucrative stock compensation scheme, a common practice in Silicon Valley. With no skin in the game, Altman was free to take up any employment in the field of AI.

Globally, the response to AI ranges from optimism to caution. There is no doubt that AI can do some routine and mundane tasks. AI can also generate responses to questions depending on the quality and quantity of data it is trained on.

Needless to say, if the data is incorrect, the responses will also be incorrect. Globally, there appears to be a consensus that AI should be regulated, and it seems only a matter of time before regulations are enacted. Once AI is regulated, entities in the AI space would likely opt for the for-profit model.

Most regulations would come with rules and conditions. Flouting them would attract penal action, including monetary penalties. Entities that are penalised would prefer to pay these penalties from profits rather than from grants or donations.

Employee Activism?

Another interesting aspect of the OpenAI saga was the letter, signed by a majority of its employees, expressing no faith in the board that remained after Altman was fired. Shareholder activism, an unheard-of concept about a decade ago, is prevalent in most countries today.

Could the developments at OpenAI be the beginning of employee activism in other entities? That would probably depend on the seriousness of the management's actions. There will be no dearth of action in the AI space, both in terms of the technology and the happenings at the entities that develop it.

The writer is a chartered accountant
