Last week’s failed coup at ChatGPT developer OpenAI was one of the most consequential business events in decades.
It killed any chance of artificial intelligence being self-regulated by the companies developing it, and it was a big setback for “Effective Altruism” (EA), at least in the corporate world.
The EA movement began about 12 years ago, based largely on the ideas of philosopher Peter Singer and in particular his argument that people should spend their time and resources saving as many lives as possible. It’s related to the theory of utilitarianism, which says maximising the net good in the world should be the purpose of life.
In 2015, EA took corporate shape in the form of OpenAI Inc, a non-profit philanthropic charity created by a group of effective altruists – Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston and Peter Thiel – with a mission, as they put it, “of building safe and beneficial artificial general intelligence for the benefit of humanity”.
The mission statement, still on its website, goes on to say: “Seeing no clear path in the public sector … we decided to pursue this project through private means bound by strong commitments to the public good.”
OpenAI’s board was made up of Effective Altruism devotees, and they collected a bunch of engineers who also believed in AI for the good of mankind. In November last year, eight years on from the start, they launched the result of their endeavours – ChatGPT – and it’s fair to say that changed the world.
In May this year a bunch of AI scientists and academics signed a “Statement on AI Risk” which was pure Effective Altruism. It said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It got a lot of attention for a few days, and then it didn’t, and the signatories went back to coding and teaching AI.
Fourth from the top of the list was, and still is, Sam Altman, the sacked and now reinstated CEO of OpenAI.
Number 10 on the list is Ilya Sutskever, a director of OpenAI and the guy who told Sam Altman he was sacked, and there are 28 others from OpenAI on it as well. But there are none from Google, Apple, Amazon, Nvidia or Facebook; there are seven signatories from Microsoft, but not CEO Satya Nadella.
Weird non-profit charity
The OpenAI non-profit start-up asked for donations to begin with, but raised only $US130.5 million, which wasn’t enough. So they created a for-profit subsidiary called OpenAI Global LLC, which could raise capital, and did – $US3 billion in four rounds, the latest at a valuation of $US86 billion, based on a revenue forecast for 2024 of $US1 billion.
So this is a very weird non-profit charity.
Microsoft agreed to invest $US10 billion in January this year, although there was a story last week that only a fraction of that money has been sent to OpenAI, and much of what has arrived has come in the form of services.
Briefly, the sequence of events of the past two weeks: Sam Altman was fired by the board of the non-profit OpenAI Inc following “a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
Fired, then hired again: Sam Altman, CEO of OpenAI which created ChatGPT, was among signatories warning about AI dangers. Photo: Getty
Sam was then hired by Microsoft, along with Greg Brockman, and Microsoft CEO Satya Nadella said everybody else at OpenAI could come too.
Ninety per cent of the staff at OpenAI signed a letter to the board demanding Sam’s return and threatening to go and work at Microsoft with him, so it looked like Microsoft would effectively get OpenAI for nothing. Then Sam said he’d go back if the board went (“it’s them or me”) and he won. The board’s gone and Sam’s back.
A humiliated Ilya Sutskever even put out a statement that he “deeply regretted” his actions.
In the middle of all this, the staff wrote a letter to the board that included this statement: “You also informed the leadership team that allowing the company to be destroyed would be consistent with the mission.”
And there lies the nub of the problem.
If you’re running an Effective Altruism charity for the benefit of mankind, and you think that maybe what you’re doing is no longer benefitting mankind, and might even be risking the extinction of mankind, you probably should stop doing it.
But the problem is that none of the people worrying about the risks of AI has any idea what to do about those risks apart from stopping, which is not going to happen, if only because plenty of companies besides OpenAI are doing it too.
And anyway, the people working on ChatGPT don’t want to stop either: they’re looking forward to getting rich.
Signing a statement that AI might make humans extinct like nuclear bombs and pandemics was overreach. Is ChatGPT going to extinct us? Hardly.
What the worriers are talking about is that if ChatGPT and other AI chatbots keep being developed and keep getting smarter, then maybe one day they will become self-aware, like Skynet or the Matrix, and kill us all – which to most people seems far enough off not to worry about.
Victory for capitalists
The reverse OpenAI coup was a victory for the capitalists over the philanthropists, the money-makers over the altruists, and the gung-ho over the worried.
The actual clear and present danger is that AI is controlled by the same cartel of big companies that control everything else in technology.
Frank McCourt, who used to own the Los Angeles Dodgers baseball team, is leading the resistance with an organisation called Project Liberty. He said this week: “Basically five companies have all the data. Large language models require massive amounts of data. If we don’t make changes here, the game is over … Only these same platforms will prevail.”
The manifesto of Project Liberty says: “Big tech and social media giants are inflicting profound damage on our society.” OpenAI is now one of them.
The only way Frank McCourt and his fellow campaigners against the power of the big tech cartel will avoid ending up like the Effective Altruists who tried and failed to control ChatGPT is if governments get involved.
That’s because only governments can effectively regulate; there has to be a “clear path in the public sector” or it won’t happen.
In capitalism, money usually gets in the way of altruism.
Alan Kohler writes twice a week for The New Daily. He is finance presenter on ABC News and also writes for Intelligent Investor.