Who decides what ChatGPT tells you? And why should you care? Experts from the AI and blockchain space weigh in on the risks of centralization and why decentralized AI matters.
Bubble or not, AI keeps weaving deeper into our lives, supplementing and increasingly replacing search engines, therapists, senior colleagues, consultants, and everything in between. It's becoming a tool we can barely work without. And through all these interactions, we share more with it than we've ever shared with anyone, trusting it more than any other source. Yet that trust effectively goes to two or three commercial companies, each tied to a specific jurisdiction, accountable chiefly to its shareholders, and driven by one overriding motive: quarterly stock price growth.
The Real Risk Isn't Robot Rebellion
This concentration of influence is what many industry experts see as the real danger. Once the fears about AI replacing us all, or outright enslaving us, subside, we may find ourselves in a reality where a far more mundane threat has materialized: a handful of people in a couple of jurisdictions increasingly dictate what we read and watch, what we buy, what we accept as fact, what we think, and even which decisions we're inclined to make.
Alexander Rugaev, serial entrepreneur and VC expert, describes how systemic this problem is:
"When a handful of tech platforms own the core models, data centers and distribution channels, their bugs, biases and commercial incentives scale to everyone. That creates monopolistic control over innovation, access and narratives: a small group can quietly decide which tools exist, which voices are amplified or downranked, and how history and culture are filtered through a single 'default' worldview."
This isn't abstract power. Beyond the commercial influence over which products and marketplaces get recommended, it's a very concrete control over what information the model outputs, which narratives it amplifies, and which it mutes. In today's world, it's also a geopolitical asset for shaping public opinion and eroding other countries' digital sovereignty.
Denis Smirnov, researcher at DAO Builders, sees this as a threat of cognitive monoculture: "The core risk is the concentration of cognitive power. We're talking about a tiny group deciding which models get built, what data shapes them, and what goals they serve. This isn't just a market monopoly. Technically, it's a single point of failure. Politically, it's a monopoly on how we frame reality, the ultimate lever for controlling the world's thinking infrastructure."
It doesn't even have to be explicit censorship. It can work as a well-intentioned "invisible filter." When a single system processes prompts from hundreds of millions of people, its filters become an invisible, soft editor of reality for each one of them.
Vlad Pivnev, CEO of ICODA, offers a concrete example: "Journalists testing the Chinese chatbot DeepSeek found it would refuse to answer questions about certain historical events or criticisms of leadership, stating it was 'beyond its competence,' while freely discussing similar topics about other countries. This illustrates the core risk: censorship and control of information. In a centralized system, the state, the corporate owner, or any other controlling entity can programmatically decide what facts, ideas, or narratives are permissible."
Compute-to-Data, Not Data-to-Cloud
As long as user data feeds other people's models, it remains a commodity. The question is how to build an architecture where users become full participants, not products.
"The most robust structure must be built on a simple principle: individuals retain ownership of their data," says Ed Musinschi, co-founder of Safe AGI Alliance. "Technologically, we need a strong 'compute-to-data' paradigm, sending the model to the data via federated learning, not the other way around. This must be underpinned by transparent, on-chain governance for any shared datasets, with auditable access logs."
In this approach, instead of users sending raw data to the cloud, the model comes to the data. Training happens where the data lives, only model updates travel back, and the central operator never sees the source material.
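To make the mechanics concrete, here is a minimal sketch of federated averaging in Python. It's a toy under simplifying assumptions (linear regression, equally weighted clients, no secure aggregation), and names like local_update and federated_average are illustrative rather than any framework's API:

```python
# Compute-to-data in miniature: the model travels to each client,
# only parameter updates travel back, raw data never moves.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_average(weights, client_datasets):
    """Server round: ship weights out, average the updated weights."""
    updates = [local_update(weights, X, y) for X, y in client_datasets]
    return np.mean(updates, axis=0)

# Three clients, each holding data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_average(w, clients)
print(w)  # converges toward [2.0, -1.0] without pooling any raw data
```

In a real deployment, the averaging step would typically add secure aggregation or differential privacy, so individual updates can't be reverse-engineered either.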
"Forget the outdated idea of data as an asset you 'sell.' In a functioning decentralized system, data should be a stream governed by revocable permissions. You keep your raw data in your own sovereign space. Then, you grant AI models granular, time-bound licenses: 'You can infer from this for a week,' or 'You can learn from this pattern for a month.' Each right is separate and revocable." (Denis Smirnov, DAO Builders)
The distributed, trustless accounting layer is where blockchain comes in. It records who used what data, when, and under what terms. Instead of trusting institutions, you get cryptographic guarantees that the system's rules are being followed.
"Blockchains track who used what data, when, and under which license," explains Alexander Rugaev. "Smart contracts can enforce usage terms automatically, trigger micropayments when your data contributes to training, maintain a tamper-evident history of data provenance and consent. This turns data access from a one-time extraction into an ongoing, auditable relationship."
Who Actually Benefits From Decentralization?
The decentralized alternative would benefit essentially everyone outside the top five companies — and any country that isn't the USA or China. But in reality, the answer runs deeper.
The first beneficiaries are ordinary users. "They regain access to uncensored, verifiable information and a tool they can truly trust," says Vlad Pivnev.
It's not just about individual users, though. Ed Musinschi identifies three levels: individuals, communities, and autonomous digital entities. Each gains something different: individuals get sovereignty over their data, communities gain the power to manage shared resources, and autonomous agents become full participants in the ecosystem. Health systems can train diagnostic models, guilds can build domain-specific assistants, cities can develop AI for infrastructure. All without handing control to a vendor in another country.
This outcome isn't automatic, however. "We get these benefits only if we bake them in from the start," notes Genadi Yanpolskiy, blockchain and AI advisor and angel investor. "That means enforceable data rights, real mechanisms for fair compensation (like data dividends or community funds), and governance models where your voice isn't just a token vote."
Decentralized AI projects don't need to "beat" Google or OpenAI. Creating a viable alternative is enough. Over time, the system will self-regulate. Just as Linux never killed Windows or macOS, yet now powers roughly 75% of servers, decentralized and open-source AI will coexist with centralized corporate solutions.
But "who wins" is only part of the picture. There's also the question of how we'll live with AI once it stops being just a tool.
From Tool to Teammate
When AI becomes autonomous — starts acting, not just responding — the relationship changes. "It evolves from using a tool to collaborating with an agent," says Ed Musinschi. "People will interact with AI as a negotiator, a collaborator, and a representative. We will see the rise of personal AI agents that manage your information, negotiate on your behalf, and protect your interests online."
For this partnership to work in people's favor, users need to become "sovereigns" with their own programmable agents truly aligned with their interests.
"It's a fundamental shift: from 'user-tool' to 'stakeholder-counterparty.' You'll have a long-term relationship with an agent that has memory, understands your values, and works towards multi-year goals. We'll offload planning and negotiation to them. They, in turn, will depend on us for legitimacy and legal footing. This demands new protocols: for explicit consent, boundaries, observing an agent's reasoning, and even formal 'divorce procedures' for terminating a relationship." (Denis Smirnov, DAO Builders)
Autonomous agents already control wallets, sign contracts, and trade on DeFi protocols, and this is just the beginning.
Alexander Rugaev suggests viewing AI as complementary intelligence: "Humans and machines are good at different things. Models excel at data processing, pattern recognition and consistency. Humans excel at context, value judgments, creativity and dealing with ambiguity. The key design question for each domain is: which arrangement actually serves human values and long-term resilience, rather than just short-term efficiency?"
"The goal isn't just efficiency; it's to ensure AI autonomy amplifies human dignity, not erodes it," adds Genadi Yanpolskiy.
The Window Is Open Now
AI's architecture is taking shape right now. We have a brief window to influence what gets built into the foundation. Will we remain active participants in this relationship, or turn into passive consumers of others' decisions? The answer depends on the choices we make today. If we want symbiosis rather than subjugation, we need to build for symbiosis.
"It's not a simple 'yes' or 'no' to the question of whether we really want centralised AI," sums up Alexander Rugaev. "Instead, we should explore the more open question of what balance of centralization and decentralization we're prepared to live with, and on whose terms. The time to argue about that, out in the open, is now, while the architecture is still taking shape."
This isn't utopia, and there are no guarantees. Decentralization has its own problems: governance complexity, misaligned incentives, new vulnerabilities. But it preserves the possibility of choice. Who owns the data that trains the models? Who can shut them down? Who's accountable when an agent makes a mistake? Whose values will the system reflect?
Centralized AI offers convenience and speed. Decentralized AI offers agency. The balance we choose is being set now.