Most conversations about AI focus on its capabilities — what it can already do or what it's about to learn (and whose jobs it's about to replace). But beneath the surface, a deeper structural shift is taking shape: a control structure where a handful of companies lock down the entire stack, from chips through clouds and models to the user interface.
This self-reinforcing concentration is neither accident nor side effect. The more compute, the better the models; the better the models, the more users; the more users, the more data and compute for the next cycle. This flywheel, over time, hardens into a monopoly.
But at the same time, concentration breeds its own counterweights: companies that fell behind, sanctioned nations, and independent communities begin building alternative architectures. The question then is not whether there will be order, but whose order it will be: designed from above or grown from below.
The Open Stack
The Greeks called designed order taxis and emergent order cosmos. AI today is taxis — controlled from above. The question is whether it can become cosmos, and that depends on decentralizing every layer of the stack, each with its own barriers and unexpected allies.
Cosmos in practice is a set of engineering approaches, each diluting centralized control at a different level of the stack. Starting with data.
Data. One such approach, federated learning, essentially inverts the conventional logic of working with data: instead of data being sent to the model — to the provider's servers for further opaque use — the model "comes" to the data, which by default never leaves the user's device.
Training also happens on user devices; only parameter updates are transmitted. Google and Apple already use this in production. The scale is more modest than centralized training, communication is more complex, and the gap to the frontier is noticeable. But the principle works: LLMs can be trained without turning users into free raw material.
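The mechanics can be sketched in a few lines. Below is a minimal, toy version of federated averaging (FedAvg) on a hypothetical one-dimensional linear model; real deployments train neural networks and add secure aggregation, but the core principle is the same: raw data stays on the client, and only parameter updates travel.

```python
# Toy FedAvg sketch (hypothetical 1-D model y = w*x, illustrative only).
# Each client trains locally on its private data; the server sees only
# the updated weights, never the data itself.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on the client's private samples."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets, rounds=50):
    for _ in range(rounds):
        # Data never leaves the clients; only weights are transmitted.
        updates = [local_update(global_w, d) for d in client_datasets]
        global_w = sum(updates) / len(updates)  # server-side averaging
    return global_w

# Three clients, each holding private samples of the same relation y = 2x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = fed_avg(0.0, clients)  # converges toward w = 2 without pooling data
```

The averaging step is the entire "server": it aggregates knowledge without ever aggregating the underlying data.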
Compute. Distributing data is an entirely solvable problem. Distributing compute is a far greater challenge. Yet even here, the notion that a serious model necessarily requires a massive data center is no longer unassailable. In 2024-2025, Prime Intellect trained a 10-billion-parameter model across five countries and three continents. The scale of frontier models is still a long way off, but distributed training is moving into practical territory.
Knowledge. Knowledge accumulates across model generations through distillation, synthetic data, fine-tuning on competitors' outputs — but chaotically and wastefully: each company spends resources reverse-engineering what rivals have already built. Decentralized networks propose to formalize this exchange with transparent rules of participation and clear compensation for contributors.
Openness. The tighter the leaders lock down the ecosystem, the stronger the incentive for laggards to seek a different path. Meta released Llama as open source once it became clear that catching up with OpenAI under a closed model was unrealistic: open code turned out to be a more effective competitive strategy. China arrived at the same conclusion after export restrictions cut it off from Western infrastructure and, paradoxically, is now one of the world's leading suppliers of open-source models. Cosmos in AI is born from the pragmatism of those for whom taxis left no room, and that is precisely what makes it resilient.
When DeepSeek published its weights and training details in January 2025, dozens of teams worldwide leaped forward within weeks. Most centralized players would never dare do this; it's a direct blow to their competitive advantage. Competition simultaneously drives progress and constrains it.
Incentives. Open source has its own structural weakness: without sustainable funding, maintainer commitment depends on enthusiasm that rarely lasts or the goodwill of sponsoring companies. The crypto industry has spent recent years testing decentralized protocols with built-in incentives for participants, and these potentially align well with decentralized AI. Some crypto projects are already actively building in this direction.
None of these solutions yet matches the scale of centralized infrastructure. But together, they outline a model in which knowledge ceases to be a resource locked inside a single infrastructure. The remaining question is what these alternatives will run on.
Hardware Sovereignty
Compute sovereignty is impossible without energy sovereignty. Any conversation about control over AI can't be separated from the question of who controls electricity and chips.
The AI industry's real bottleneck turned out to be energy, not algorithms. A single prompt to ChatGPT consumes roughly ten times more energy than a Google search, and there are billions of such prompts every day. Deloitte forecasts that data centers will double their power consumption by 2030, with AI, not cloud services or streaming, as the primary driver of demand growth.
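To get a feel for the scale, here is a back-of-envelope check using commonly cited approximate figures; the per-query numbers and the daily prompt volume are assumptions for illustration, not measurements.

```python
# Assumed figures (illustrative): ~0.3 Wh per search, ~3 Wh per LLM prompt,
# ~1 billion prompts per day.
search_wh = 0.3
prompt_wh = 3.0
prompts_per_day = 1e9

ratio = prompt_wh / search_wh                  # the "ten times" figure
daily_gwh = prompt_wh * prompts_per_day / 1e9  # Wh -> GWh of daily demand
```

Even under these conservative assumptions, prompt traffic alone adds gigawatt-hours of daily demand, which is why energy, not algorithms, sets the ceiling on scaling.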
Whoever controls electricity controls scaling. That's why Big Tech is moving from merely buying energy to owning power plants, adding a base energy layer to its control structure.
Geographic concentration deepens the dependency. Over 90% of advanced chips are manufactured by TSMC in Taiwan; EUV lithography comes from a single company, ASML in the Netherlands. GPU export restrictions have already become a tool of geopolitics.
One earthquake in Taiwan or one decision in Washington could freeze AI development across entire continents. Per a 2025 SemiAnalysis submission to the U.S. government, a TSMC shutdown would drop global AI chip shipments to zero, and the U.S. would need at least five years to build replacement manufacturing capacity.
Access to AI becomes a function of supply chains and politics, not just talent or demand. That fragility is exactly what makes thousands of idle GPUs around the world suddenly relevant: Kazakhstan, Vietnam, and the UAE have capacity without a market to put it to work.
On the other hand, today we have living proof that a distributed model can operate at scale: the Bitcoin network. Without a central authority or active management, purely through built-in incentives, Bitcoin has mobilized distributed infrastructure with a capacity estimated at roughly 20 GW, more than any tech giant today. And the efficiency of mining ASICs has improved several hundred thousand-fold over a decade and a half.
This is what markets can do: mobilize distributed capacity into a global organism through incentives alone, no central authority required. Distributed computing isn't theoretical; Bitcoin has already laid the infrastructural groundwork.
Distributed compute marketplaces are already connecting these idle resources, offering GPUs at prices 50-85% below centralized clouds. Yet distributed infrastructure trades efficiency for resilience. Coordination between nodes, latency, training losses: for frontier models this cost remains prohibitive, but for inference and fine-tuning it already works.
The hardware itself is evolving in ways that favor distribution. New chips that process data directly in memory are six to ten times more energy-efficient than GPUs and are designed primarily for laptops and edge devices. The less energy a computation requires, the more devices can participate in a distributed network and the fewer reasons remain to treat centralization as the only viable model.
Today, control over energy and chips is taxis by default. But incentives, marketplaces, and hardware evolution are already assembling an alternative: cosmos, growing from below.
The Hardest Layer: Governance
Who controls what AI knows? Who decides whom it serves? Who is accountable when something goes wrong? In a centralized model, the answer to all these questions is the infrastructure owner. But distributing control demands its own infrastructure of trust: who holds the data, who sets the rules, who answers for the outcome.
Three questions define this layer: security (who protects the data), ethics (whose values are encoded), and accountability (who answers when things fail).
Who decides
Self-sovereign identity places the individual at the center: data stays with its owner, and platforms receive only temporary, revocable access through cryptographic credentials. Trusted execution environments and cryptographic proofs already allow models to learn from data without seeing it in plaintext.
The next natural step is Data DAOs, whose participants collectively manage data pools and vote on the terms of access. Revenue is distributed via smart contracts, and consent can be revoked at any time.
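A toy sketch of that logic, written in Python as a stand-in for what would live in an on-chain smart contract (the member names, weights, and revenue figure are hypothetical):

```python
# Hypothetical Data DAO: members contribute data with a weight, consent is
# revocable at any time, and revenue is split pro rata among consenting
# contributors, mirroring what a smart contract would enforce on-chain.

class DataDAO:
    def __init__(self):
        self.members = {}        # member -> data contribution weight
        self.consented = set()   # members whose data may currently be used

    def join(self, member, weight):
        self.members[member] = weight
        self.consented.add(member)

    def revoke(self, member):
        # Revoked data is excluded from future access and revenue rounds.
        self.consented.discard(member)

    def distribute(self, revenue):
        """Split a payment pro rata among consenting contributors."""
        pool = [m for m in self.members if m in self.consented]
        total = sum(self.members[m] for m in pool)
        return {m: revenue * self.members[m] / total for m in pool}

dao = DataDAO()
dao.join("alice", 3)
dao.join("bob", 1)
dao.revoke("bob")                  # bob withdraws consent
payouts = dao.distribute(100.0)    # only consenting members are paid
```

The essential property is that consent and compensation are enforced by the same mechanism: withdrawing one automatically ends the other.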
Who shapes the worldview
A monopoly on infrastructure sooner or later becomes a monopoly on narratives. A handful of models from a single jurisdiction mediate an ever-growing share of information consumption, and the world drifts toward intellectual monoculture: one implicit worldview, one narrative kill switch.
Decentralization doesn't eliminate bias; that's impossible. But it distributes it. As with a free press, many models with different biases create an environment that reduces the potential for any single bias to achieve total dominance. Not a guarantee of truth, but the kill switch is no longer in one pair of hands.
Who is accountable
Today, autonomous AI agents already handle real money: they own their own wallets and independently execute contracts on the blockchain.
Many AI experts call 2026 the year of agents; more cautious forecasts say 2027. Either way, with the imminent mass adoption of AI agents, the question of accountability for their actions ceases to be theoretical.
One proposed answer is Know Your Agent: a verifiable agent identity with defined authority boundaries and a reputation history.
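One way such a record could look, sketched in Python; every name, limit, and the identifier scheme here is hypothetical, and a real system would rest on signed, verifiable credentials rather than a bare hash.

```python
# Hypothetical "Know Your Agent" record: a stable identity, an explicit
# authority boundary, and an append-only history that serves as a
# reputation trail.
from dataclasses import dataclass, field
import hashlib

@dataclass
class AgentIdentity:
    owner: str
    spend_limit: float               # authority boundary: max per action
    history: list = field(default_factory=list)

    @property
    def agent_id(self):
        # Identifier derived from the owner key (illustrative only).
        return hashlib.sha256(self.owner.encode()).hexdigest()[:16]

    def authorize(self, action, amount):
        ok = amount <= self.spend_limit
        self.history.append((action, amount, ok))  # reputation trail
        return ok

agent = AgentIdentity(owner="org:example", spend_limit=50.0)
agent.authorize("pay_invoice", 30.0)    # within bounds: allowed
agent.authorize("pay_invoice", 500.0)   # exceeds authority: refused
```

The point of the design is that refusals are recorded alongside approvals: the reputation history shows not only what an agent did, but what it tried to do.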
But Know Your Agent is identification, not adjudication. Decentralization doesn't resolve the question of accountability; it reformats it. Distributing bias means stripping any single party of a monopoly on the narrative. Distributing accountability means risking that no one ends up bearing it.
Governance from below is perhaps the hardest part of the equation. On the other hand, this is precisely where cosmos can do more than merely replicate the mechanisms of taxis: it can offer ones that are impossible within taxis by definition. Blockchain protocols have been experimenting with models of distributed governance for over a decade, and AI governance doesn't start from scratch.
"'AI becomes the government' is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance." — Vitalik Buterin on X
From Consumption to Participation
Taxis and cosmos are poles of a single spectrum. Markets grow from below but are regulated from above; democracies rest on voting but operate through institutions. AI will follow the same path. What balance society is willing to accept is not merely a technical question: technology, energy, and governance are too deeply intertwined to be solved in isolation. Distributed knowledge is meaningless without distributed infrastructure, and that infrastructure is meaningless without distributed governance.
In this equation, the role of the individual changes, too. The centralized model implies consumption on someone else's terms: someone else's data, someone else's rules, someone else's training priorities. A distributed architecture makes it possible to control what a model learns from, the boundaries within which an agent operates, and whom it answers to. This is a shift from consuming AI to participating in it.
So far, none of the alternatives described have matured to the scale and quality of centralized platforms. But all of these solutions work, and each one narrows the territory where centralization is the only option. And the decisions that will set the trajectory for years to come are being made right now, in protocol architecture, in openness standards, in energy policy, long before most people even consider the possibility that an alternative exists.