Come to think of it, OpenAI CEO Samuel Altman looks like he has a really cool job and he’s having a lot of fun with it. And since ChatGPT was released to the public in late 2022, as FT puts it, he’s been catapulted to the status of worldwide celeb.
So, what even is OpenAI? What is it for? Who is it catering to? Is it about making money after enabling high school kids to do their homework in 2 minutes? Or is it like an NGO looking to do good for the betterment of humanity pro bono?
Elon Musk is someone who’s miffed that OpenAI seems to have this identity crisis. Some may not know that OpenAI was originally founded as a non-profit dedicated to making sure “AI benefits humanity”. Even fewer might know that Elon Musk is named as a co-founder and an early backer of the AI goliath. Though, in 2018, citing disagreements over its direction and governance, Elon walked away.
Since then, OpenAI is said to have shifted to a capped-profit model housed in an LLC, limiting investor returns to 100x their initial investment. Elon was not too happy with that shift, and it may have contributed to the lawsuits he filed, which are still ongoing.
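To make that capped-profit mechanism concrete, here’s a minimal sketch of how a 100x return cap works in principle. All the dollar figures and the function name are hypothetical, purely for illustration; they are not OpenAI’s actual terms.

```python
def capped_return(initial_investment: float,
                  gross_return: float,
                  cap_multiple: float = 100.0) -> float:
    """Payout an investor receives under a profit cap:
    whichever is smaller, the gross return or the cap."""
    return min(gross_return, initial_investment * cap_multiple)

# A hypothetical $10M stake that grows 500x gross
# still pays out only 100x ($1B) under the cap.
payout = capped_return(10_000_000, 10_000_000 * 500)
print(int(payout))
```

The point of the structure is the `min()`: below the cap, investors are paid like any other shareholders; above it, the excess was meant to flow back to the non-profit’s mission.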
And there’s another potential change.
In May 2025, Samuel was said to have published a letter to employees announcing that plans to convert OpenAI into a fully for-profit entity had been abandoned. Instead, word is that it’ll continue to be overseen and controlled by its non-profit board.
It’s said that OpenAI is restructuring to become something called a PBC (Public Benefit Corporation). This is an interesting kind of entity in the US, one that has to balance the financial interests of shareholders while working towards a specified public benefit. To some extent, most public-facing companies market themselves as doing both. A traditional for-profit corporation might be legally bound to maximize shareholder value, but a PBC might rein that in. That special public benefit, in OpenAI’s case, might be promoting open access to AI or ensuring that AGI “benefits all of humanity”.
OpenAI would still do all the things a for-profit corporation does, but it would also formally commit itself to goals that benefit the masses. That sounds cool. Of course, you might wonder whether an AI goliath could build AGI to benefit you, the people, while still catering to the expectations of marquee investors. Some might even consider it a move to get back into the good graces of the cynical and critical. The PBC label might be about placating those critics who feared a total pivot to pure corn-fed capitalism powered by a powerful AI god.
OpenAI, when founded in 2015, seemed to be launched with a messianic-esque mission: save the world using AI that could outperform humans at any intellectual task. That’s what Person Of Interest let us know. The idea may have been that AI research would be conducted transparently and without commercial incentives, in a way that wouldn’t tempt potential AI founders to cut corners or dodge public accountability.
And if AI were to be developed irresponsibly, it would be catastrophic, kind of like what happened with Enthiran. The most doomsday-ish scenario would be The Terminator. So, these tech visionaries banded together to build an AI company, while still finding time to ask Elon Musk about his favourite video games in Y Combinator videos.
Of course, though OpenAI has “open” in its name, it didn’t seem to be open-source. Maybe, it’s DeepSeek’s prominence, maybe it’s a myriad of other factors, but there are rumblings that OpenAI might release an open-source model soon.
When ChatGPT was released to the public, it captured the public imagination. OpenAI – which you might boil down to being a research lab – was cool and received public adulation, whilst receiving some brickbats as well. Big entities like Microsoft and SoftBank had been injecting vast sums into OpenAI while figuring out ways to integrate its LLMs into their processes.
It might have been an interesting dilemma for the OpenAI team: could they grow without selling their heart and not just their craft? Would its founding mission be sacrificed on the “altar of capitalism”, as is often said? That rumination might have been fast-tracked when an unlikely coalition of public figures, visionaries and AI “ethicists” came together to sign an open letter demanding that AI development be paused before it got too smart.
Plus, a couple of other things took place. Samuel was briefly no longer CEO. Copyright lawsuits against AI companies were on the rise. Hollywood briefly shut down, worrying about what AI could mean for creative jobs. Studio Ghibli watched its signature style get replicated by an AI that didn’t belong to it. Maybe, all of that led OpenAI to the direction it’s taking now.
At this point, could a single shiny new structure serve two masters? There are some instances of PBCs in the US. Kickstarter – a name you might keep hearing on Shark Tank US – is one of them. For OpenAI, it might be a bit more complicated: which goal gets prioritized when the two conflict? Could profit have the kind of gravitational pull that dilutes the importance of ensuring AGI “benefits all of humanity”? That’s the age-old capitalism question.
Maybe, even with a PBC, the spoils for OpenAI backers might still be relatively appealing. It pays to be prescient. Because it seems like that capped-profit model is gone, like the Joker’s pencil. If what remains is a normal equity structure, that might be investor code for uncapped, exponential upside.
Currently, OpenAI might be seen more as a product. If it ends up being the infrastructure that undergirds economies, congrats to all those who saw the vision beforehand and struck while the iron was hot. The vault is unlocked. Looks like investors might not be shortchanged, after all… right?
But how much would they actually be able to steer this ship? As investors, would they be happy with no meaningful voting rights or blocking rights? Got to focus on that humanity angle. So, if there’s a commercially viable way to deploy AGI but ethical concerns arise, the team might have to shut down that avenue, even if billions in potential revenue are wiped out. Could that also mean that if Samuel is removed by the Board, shareholder approval might not be needed? What rights might remain?
And what about exit paths? Is there an IPO plan in mind? Or was a SPAC an unspoken dream? Maybe, those traditional exit windows are sealed shut, so OpenAI investors might not be able to count on them. And even if exits are possible, would they look the same as at a traditional tech company?
Because “benefitting humanity” is not your traditional aspirational KPI. Maybe, investors can try to exert moral influence, or persuade, or refuse to offer any more follow-on capital. But can they control? Was control the base desire? Maybe, the rich and the righteous might collude to form a new superpower.
But, hey, they get to tell people they have access to one of the most dominant AI infrastructures on the planet.
You might look at OpenAI’s latest move as a triumph. Some might see it as some kind of neutering of what it originally was. Compare Daredevil in his Netflix Marvel show vs his Disney+ show in Season 1. Still powered, still super, but something’s probably missing.
Will the next investor look at an OpenAI-esque startup as your classic AI play, or will they have to change their AI role model?