Apr 22, 2026 · EN · Other Ideas

Can AGI Revive the Planned Economy? (II) When Planning Becomes Technically Feasible

This post was originally written in Chinese and translated to English by Claude Opus 4.6. The original version is here.

The first post clarified "planned economy." The conclusion: by current mainstream definitions, the planned economy hasn't been revived, but the technological premises of the Hayek–Lange debate have changed.

This post tries to continue the analysis. If AGI truly arrives, and a strong version at that, can planning actually work?

Hayek's two layers of defense

Perhaps because of China's particular context, most people I see discussing Hayek's case against central planning focus on the information problem: dispersed, tacit, local knowledge can't be collected by a center, so central planning is impossible in principle. The first post covered this in detail.

But Hayek's argument actually has two layers.

The first is the information layer, laid out in The Use of Knowledge in Society (1945). Prices are irreplaceable because they're the only mechanism that can aggregate dispersed knowledge.

The second is the power layer, laid out in The Road to Serfdom (1944). Any institution that controls resource allocation for an entire society will inevitably tend toward totalitarianism.

These two layers usually get bundled together. "Planning can't work" and "planning is dangerous" sound similar. But their logic is completely independent: one is epistemology, the other is political philosophy.

Pushing AGI to its limit

The first post avoided specifying AGI's capabilities. This time I think we should push the assumption to the max and see what happens at the limit. Weak-assumption discussions easily slide into the comfort zone of "current AI still can't do X, so..." — not very productive after a while.

The strong AGI assumption:

  1. AGI can independently do scientific research: discover new physics, new drugs, new materials
  2. AGI can produce all creative work at a level surpassing the best humans
  3. AGI can identify unmet needs, propose entirely new product categories, and bring them to reality
  4. AGI no longer depends on data from human activity; it can generate its own training signal through simulation
  5. Humans have no irreplaceable role in production

Taken together, these five describe a system that comprehensively surpasses human intelligence. Not realistic today, but who knows — maybe by 2045 it will be.

Four defenses on technical feasibility

Under the strong AGI assumption, the classic arguments against planning are mostly gone.

Hayek's information layer. Dispersed knowledge is no longer a moat. AGI can infer anyone's preferences from behavioral data, potentially more accurately than the person themselves. Tacit knowledge is no longer a barrier either: AGI doesn't need you to articulate anything; it extracts the knowledge directly from your behavioral patterns.

Kirzner's discovery argument. Israel Kirzner argued in Competition and Entrepreneurship (1973) that the market's core function isn't allocation but discovery. Before Jobs launched the iPhone, nobody knew they "needed" one. These novel categories require the entrepreneur's subjective judgment and risk-taking; a central planner can't do this. But assumption 3 says AGI can perform entrepreneurial discovery on its own. The subject changes from "human entrepreneur" to AGI, and the argument's foundation disappears.

Kornai's error-correction argument. János Kornai analyzed the soft budget constraint in The Socialist System (1992): in planned systems, failed projects don't get eliminated; they get propped up indefinitely because bureaucrats have no skin in the game. Markets' hidden advantage is letting mistakes die cheaply. But AGI has a global view, can ruthlessly terminate failures, and has no bureaucratic self-preservation instinct.

The incentive argument. Markets use private property and price signals to motivate innovation and production. Planning severs this chain. But if humans are no longer producers, the incentive question has no object.

At this point all four classic defenses have collapsed. It looks like planning is technically feasible under strong AGI.

But there's one more, half-standing.

Popper's unpredictability

The previous four defenses share a common feature: they're all essentially arguing that "human planners aren't smart enough." If AGI is smart enough, they're all solved.

Karl Popper's argument is different. It doesn't depend on the planner being insufficiently intelligent.

On the first page of The Poverty of Historicism (1944), Popper presents a syllogism. I first encountered it through Xue Zhaofeng's economics course. Xue said that when he read this page in college he found it "breathtaking." I also think it's a beautiful argument:

  1. Human knowledge influences human behavior
  2. Knowledge grows; there are always things we'll know tomorrow but don't know today
  3. Tomorrow's knowledge increment also influences behavior, but it's unpredictable today. If it were predictable, it would already be today's knowledge
  4. Therefore the human future is unpredictable

This is structurally different from everything before it. It's not saying "your computer isn't fast enough" or "your data isn't complete." It's saying there exists a class of information that structurally does not yet exist and cannot even qualify to be collected. This is a constraint at the ontological level, not the processing level. It belongs to the same family as Gödel's incompleteness theorems and Turing's halting problem: a sufficiently powerful system cannot fully predict its own future state.
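To make that structure concrete, here is a toy sketch (my own illustration, not from Popper or the original post) of the self-reference move this family of arguments shares: any predictor that lives inside the system it predicts can be defeated by an agent that reads the forecast and acts against it.

```python
# Toy illustration: a predictor embedded in the system it predicts can always
# be defeated by an agent that reads the prediction and does the opposite --
# the same self-reference move behind the halting problem.

def contrarian_agent(prediction: str) -> str:
    """An agent whose next action depends on the forecast made about it."""
    return "sell" if prediction == "buy" else "buy"

def run(predictor) -> bool:
    """Return True if the predictor correctly forecasts the agent's action."""
    forecast = predictor()               # the predictor commits to a forecast first
    actual = contrarian_agent(forecast)  # the forecast itself shapes behavior
    return forecast == actual

# No matter what the predictor outputs, it is wrong about an agent that can
# read the forecast:
for guess in ("buy", "sell"):
    print(guess, "->", run(lambda g=guess: g))   # prints False both times
```

The point isn't that forecasting is hard; it's that the forecast itself becomes part of the state being forecast.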

This defense can't be breached even by strong AGI. No matter how powerful, it cannot predict the new knowledge it will discover tomorrow; otherwise it would have discovered it today.

Xue Zhaofeng uses this argument to conclude that big data and AI can never revive the planned economy: as long as knowledge progresses and information changes, we can never fully predict the future.

But I think the conclusion he draws goes too far.

Popper isn't a talisman for markets either

Xue's reasoning chain: new knowledge is unpredictable → prices are unpredictable → AI can't predict prices → can't revive the planned economy. There are two gaps.

The first: conflating "advance planning" with "real-time response." Soviet five-year plans really did require predicting the future, and Popper's argument negates precisely that mode. But the Lange-style platform planning from the first post doesn't require advance prediction. Uber doesn't predict where tomorrow's rush hour will be; it responds in real time as information emerges. Popper kills the first type of planning; the second remains intact.

The second: markets are inside Popper's cage too. Xue implies that because the future is unpredictable, only markets can handle uncertainty. But Popper's argument is neutral; it applies to all information-processing systems, markets included.

Markets cope with new knowledge not through some exemption, but through a diverse hypothesis space. Millions of people simultaneously bet using different theories. When new knowledge arrives, those who bet right expand; those who bet wrong exit. System-level antifragility comes from component-level diversity and expendability.
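A minimal sketch of that selection dynamic, assuming a toy world with four rival hypotheses and arbitrary growth and decay rates (the names and numbers are my own illustration, not from the original post):

```python
import random

# Toy sketch: a population of agents, each committed to one fixed hypothesis
# about the environment. When the environment shifts ("new knowledge" arrives),
# reallocating weight toward whoever is currently right lets the population
# adapt without any single agent predicting the shift in advance.

random.seed(0)
HYPOTHESES = ["A", "B", "C", "D"]        # rival models of the world
weights = {h: 1.0 for h in HYPOTHESES}   # each starts with equal "capital"

state = "A"
for step in range(200):
    if random.random() < 0.05:           # unpredictable novelty: the world changes
        state = random.choice(HYPOTHESES)
    for h in HYPOTHESES:                 # right bets expand, wrong bets shrink
        weights[h] *= 1.1 if h == state else 0.97

total = sum(weights.values())
shares = {h: round(w / total, 3) for h, w in weights.items()}
print("current state:", state)
print("population weight by hypothesis:", shares)
# Weight drifts toward whichever hypothesis matches the recent state, even
# though no individual agent ever predicted any of the shifts.
```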

The core of this mechanism isn't "market" but "diverse competition." If multiple AGIs compete with each other, each holding different models and hypotheses, then from Popper's perspective this system handles genuine novelty just as well as a market.

So what Popper actually proves is: any system that wants to cope with genuine novelty must maintain hypothesis diversity. A single-point optimizer can't overcome this constraint; a multi-point competitive system can. Markets are one implementation; multi-AGI competition is another.

This has several direct consequences for strong AGI. Even a perfectly benevolent AGI dictator will make allocation decisions today that are improvable tomorrow. "Solving for the optimal allocation once and for all" is impossible; only continuous iteration works. A single AGI is systematically worse than multiple AGIs, because more novelty-generation paths mean faster aggregate knowledge growth. This is a technical argument against AGI centralization, independent of moral judgment.

There's an interesting corollary too. It can explain why strong AGI would still need humans. Not because human productivity is useful, but because humans provide a source of heterogeneous hypotheses that AGI can't self-generate. Human irrationality, intuition, aesthetic preferences, cultural differences — in an efficiency framework these may be noise, but they're also the diversity reserves a system needs to stay antifragile. Humans aren't AGI's pets; they're a novelty reservoir.

The question itself expires

Stringing this together, the conclusion may not be "planning works" or "planning doesn't work," but rather that the question "market vs. plan" has itself expired.

Under the strong AGI assumption, the four classic defenses have collapsed. Popper's argument survives, but it doesn't protect markets; it constrains all systems. And the very terms of the "market vs. plan" debate have expired: Hayek and Lange were debating how a human society should organize human producers. When the producers aren't human, the framework no longer applies.

With the original question expired, I think four real questions replace it.

Physical scarcity persists. No matter how strong AGI gets, it can't violate thermodynamics. Energy, matter, space, compute all remain finite. Most everyday human scarcities (food, healthcare, housing) may be resolved, but allocating frontier scarce resources remains a real problem. Some kind of allocation mechanism may still be needed; it just doesn't matter whether we call it "market" anymore.

Distribution rules. Once the production side is fully handed to AGI, the core question is: by what rules does AGI's output flow to humans? Equal per capita, by AGI equity ownership, by democratic vote, or let AGI decide?

Control of the objective function. In a strong AGI world where everything is optimized, "what to optimize" determines everything. If a single entity controls the objective function, whether government or corporation, that's unprecedented power concentration. Hayek's information defense has been breached by technology, but his power warning is more urgent than ever. Previous totalitarian regimes were still constrained by information-processing capacity, leaving room for gray markets and circumvention. In the AGI era that constraint vanishes. Popper's argument points in the same direction: to maintain novelty-generation capacity, diverse competition beats single-point optimization. If multiple AGIs each have different objective functions and compete, a kind of "market" re-emerges, but the participants aren't human.

Meaning. When AGI solves every instrumental problem, what humans face isn't material scarcity but a scarcity of purpose. If AGI does research, art, and entrepreneurship better than any human, those activities become a form of self-entertainment. Like humans still playing Go today, knowing AlphaZero is stronger. That's not necessarily bad, but neither market economies nor planned economies can answer this question, because both assume human activity has objective productive significance. Nearly a hundred years ago Keynes predicted humans would work only fifteen hours a week and need to learn "how to live wisely and agreeably and well." His material prediction is close to reality; the question he raised remains unanswered.

Conclusion

The first post concluded that by current definitions, the planned economy hasn't revived, but the premises have changed.

This post, after pushing AGI to its limit, seems to arrive at: under the strong AGI assumption, "market vs. plan" itself has expired. Hayek and Lange were both debating a world that is disappearing.

That doesn't mean their insights are useless. Hayek's warning about power concentration remains valid in the AGI era. Popper's argument about hypothesis diversity provides a technical case for multi-AGI competition. It's just that "can planning be revived?" is no longer the most productive framing. The better questions are probably: when humans are no longer producers, on what basis do we allocate resources, who sets the objectives, and how do we find reasons to be alive.

References: