Thursday, December 11, 2025

Why OpenAI’s Story Shows the Ethical Limits of Capitalism – And Why We Need Prefigurative Institutions, Not Just Kinder Capitalists

Reading “Why OpenAI is a prime example of the ethical limits of capitalism” felt less like discovering something new and more like watching someone carefully name a pattern many of us have seen repeat across sectors.

Nikhil Venkatesh’s piece makes a simple but important point: you can’t throw a moral mission into a capitalist competitive environment and expect that mission to stay intact just because you meant well at the beginning. OpenAI’s trajectory—from idealistic non-profit to profit-driven “public benefit” corporation with multi-billion-dollar investors—isn’t a tragic accident. It’s a structural consequence. And that’s exactly the problem that keeps resurfacing in attempts to build ethical or socially grounded alternatives to dominant technological and institutional models.

The problem isn’t “bad founders.” It’s the rules of the game.

Venkatesh leans on two thinkers who rarely get put in the same sentence: Karl Marx and Milton Friedman. Both, in different ways, insist that firms operating in capitalist markets are structurally pushed toward profit-maximization, whatever the founders’ original intentions.

  • For Marx, this shows up as the “coercive laws of competition”: if one firm cuts ethical corners, takes more risk, or grabs more investment to grow faster, others either adapt or die.
  • For Friedman, the logic is internal: managers who don’t put shareholder returns first will simply be replaced by those who do. His shareholder doctrine codified this as a kind of moral duty within neoliberal capitalism.

The uncomfortable implication is that you cannot rely on a firm’s founding mission to restrain it indefinitely. Even an organization created to “benefit all of humanity” will, under enough pressure, become something else. We’ve seen this over and over: platforms that began as tools for connection become ad-targeting engines and surveillance infrastructure; “sharing economy” apps become gig-work precarity machines. OpenAI is just the latest, more visibly existential example.

Why this hits so close to home

Anyone who has ever attempted to build mission-driven or community-oriented institutions will recognize the bind. Projects that begin with ethical intentions—whether in education, technology, or community development—eventually face the same pressures:

  • the need for funding;
  • the temptation (or necessity) of commercial partnerships;
  • the competitive logic that punishes institutions that refuse to scale, accelerate, or commercialize;
  • and the subtle drift from public mission to financial survival.

On paper, any socially motivated project may look like something that could attract investment or monetization in the future. And that is precisely where the OpenAI trap emerges: Take money now, promise ethics later. Venkatesh’s article is a useful warning: if you don’t design against these corrupting logics from the beginning, the “later” never comes. The structure eats the intention.

Prefiguration means designing the institution, not just the tool

Where I want to push the conversation further is this: it’s not enough to say “capitalism has ethical limits” and conclude that we therefore need regulation (we do) or “better people in charge” (we definitely don’t). More importantly, we need different institutional forms, and that raises broader questions for anyone working on alternatives:

  • How do you create an initiative whose governance is shared between stakeholders, rather than captured by investors? Could multistakeholder governance models be adapted here?
  • How do you design revenue models that support public missions while avoiding total dependence on markets and financialization?
  • How do you ensure that ethical constraints are built not only into the rhetoric but into the legal and institutional architecture of the project?
  • How do you safeguard public or community access in contexts where commercial pressures inevitably appear, especially when dealing with critical infrastructures like artificial intelligence and digital platforms?

These are prefigurative questions. They’re not about making capitalism nicer at the margins. They’re about building organizational structures that anticipate—and resist—the “coercive laws of competition” the article describes.

In practice, that might look like:

  • multi-stakeholder governance;
  • binding ethical frameworks that cannot simply be discarded at the next funding round;
  • protections around community access and non-commercial use;
  • structures designed to prevent extraction and enclosure of knowledge or data;
  • commitments to transparency and public accountability.

That’s what I mean by prefigurative democracy in this context: you don’t just build democratic tools; you build prefigurative institutions that are themselves constrained to act democratically, even when it becomes inconvenient.

The OpenAI parable and the politics of “good intentions”

One of the subtle but useful points in Venkatesh’s piece is that we don’t need to psychologize Sam Altman or anyone else to see the pattern. It’s tempting to tell the story as, “They started good and then sold out.” But the more honest—and more unsettling—reading is that they started good inside a structure that would eventually force a choice: compromise the mission, or lose the game. Unfortunately, most people, most of the time, choose “stay in the game.”

If we want a different outcome for AI, or for education, or for democracy, or for digital transformations more broadly, we can’t just insist on better founders. We have to refuse some of the games and design new ones. That is the space many alternatives are trying to open: institutions that do not simply mirror the logics of the systems they are meant to challenge.

Beyond OpenAI: building the infrastructures we actually need

So yes: OpenAI is now a neat illustration of the ethical limits of capitalism. It’s also a reminder that we’re running out of time to build alternative infrastructures before the current ones become inescapable.

Projects that combine:

  • non-profit or cooperative education;
  • experimental governance or community democracy;
  • alternative economic experiments and solidarity economies;
  • technologies designed to support inquiry rather than replace it;
  • and funding mechanisms that sustain all of the above while keeping it accountable to communities rather than investors.

Such projects are not merely “interesting”; they are early tests of whether we can escape the fatalism that says: “In the end, the market always wins.”

If there’s a silver lining to OpenAI’s story, it’s that it’s happening early enough to serve as a warning. We can still choose to build technologies and institutions that prefigure the worlds we actually want to live in. But that will mean being as ambitious about how we design the funding and governance of our projects as we are about the tools themselves.


Written by Timothy Weldon, an anthropologist, social entrepreneur, and writer whose work explores democracy, capitalism, autonomy, and alternative education, with ongoing projects in Italy and across the global South.