OpenAI is racing to raise up to $100 billion and go public in 2026 with a potential $1 trillion valuation. At the same time, the company is proposing taxes on AI profits and public wealth funds to help people who lose their jobs to AI. If that sounds contradictory, you’re paying attention.
This is the strange dance happening right now in Silicon Valley. The same companies building AI systems that could displace millions of workers are also drafting policy proposals for how to soften the blow. OpenAI CEO Sam Altman has released recommendations that include creating public wealth funds to give every citizen a stake in AI-driven economic growth, regardless of whether they have a job.
The Trillion-Dollar Question
Let’s start with the numbers. OpenAI is preparing for one of the biggest initial public offerings in history. Reports suggest the company could be valued at up to $1 trillion when it goes public toward the end of 2026. To get there, it’s currently trying to raise up to $100 billion, with some estimates putting its valuation at $830 billion even before the IPO.
That’s an astronomical amount of money for a company that was founded as a nonprofit research lab less than a decade ago. The scale of these numbers tells you everything about how seriously investors are taking AI’s potential to reshape the economy.
The Robot Tax Proposal
Here’s where things get interesting. Altman and OpenAI aren’t just building AI systems and walking away. They’re actively proposing a framework for how society should handle the economic disruption that’s coming. The centerpiece is a tax on AI profits that would fund public wealth funds.
Think of it like this: as AI systems become more capable and companies make more money from automation, a portion of those profits would go into a fund that distributes money to citizens. Everyone gets a slice of the AI economy, even if they’re not working in tech or if their job has been automated away.
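The arithmetic behind that idea is simple, even if the politics aren’t. Here’s a rough sketch of how a per-citizen dividend from an AI-profits levy might be computed. Every figure in this example is a made-up assumption for illustration, not a number from OpenAI’s proposal.

```python
# Hypothetical illustration of an AI-profits wealth-fund dividend.
# All figures are invented assumptions, not OpenAI's actual proposal.

def annual_dividend(ai_profits: float, tax_rate: float, citizens: int) -> float:
    """Per-citizen payout if a flat levy on AI profits funds a wealth dividend."""
    fund = ai_profits * tax_rate   # money flowing into the public wealth fund
    return fund / citizens          # split equally among all citizens

# Assumed example: $500B in AI-sector profits, a 10% levy, 260M adult citizens.
payout = annual_dividend(500e9, 0.10, 260_000_000)
print(f"${payout:,.2f} per citizen per year")  # roughly $192 in this scenario
```

The point the numbers make: even a modest levy on very large profits yields only a modest dividend per person, which is why the proposal pairs the fund with expanded safety nets rather than relying on it alone.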
The proposal also includes expanded safety nets to address job loss and inequality. This isn’t just about giving people money. It’s about fundamentally rethinking how we distribute wealth in an economy where human labor might become less central.
Why This Matters for Regular People
If you’re not a tech worker, you might be wondering why you should care about OpenAI’s corporate structure or policy proposals. Here’s why: the decisions being made right now will determine whether AI makes your life better or harder.
The optimistic scenario is that AI handles tedious work, productivity soars, and we all benefit through shorter workweeks and better living standards. The pessimistic scenario is that AI concentrates wealth in fewer hands, eliminates jobs faster than new ones are created, and leaves most people worse off.
OpenAI’s proposals are an attempt to engineer the first scenario. Whether they’ll work is another question entirely.
The Skeptic’s View
There’s something almost absurd about a company raising $100 billion to build AI systems that might eliminate jobs, then proposing taxes on itself to help the people it displaces. It’s like a tobacco company funding lung cancer research.
Critics might argue that if OpenAI is genuinely concerned about inequality, it could slow down development, share its technology more widely, or structure itself differently from the start. Instead, the company is racing ahead at full speed and asking governments to clean up the mess later.
There’s also the practical question of whether governments will actually implement these policies. Proposing a robot tax is easy. Getting it passed through Congress or Parliament is another matter entirely, especially when tech companies have armies of lobbyists working to protect their interests.
What Happens Next
OpenAI’s path to a potential $1 trillion valuation will be one of the biggest business stories of the next few years. But the more important story is whether the company’s policy proposals gain traction. Will other countries and companies adopt similar frameworks? Will governments act before the job displacement becomes severe?
The tension between building transformative AI and managing its social impact isn’t going away. If anything, it’s going to get more intense as AI systems become more capable. OpenAI’s proposals are one attempt to square this circle. Whether they’re sufficient, sincere, or even feasible remains an open question that we’ll all be living with the answer to.