In 1937, Ronald Coase sought to answer the question: why do humans organise themselves into collectives for economic decision-making? The resulting paper, The Nature of the Firm, is one of the seminal works in organisational decision-making and management theory. The core idea is that humans - bounded both by their physical limitations and by their limited capacity to aggregate information, finance, and resources - are required to cooperate in groups in order to engage in long-term planning or large-scale economic activity. The price mechanism, which coordinates activity at the scale of markets by signalling the relative scarcity of goods, is not well-equipped for the low-level decision- and plan-making that corporations require. Thus, the firm forms a kind of command-and-control architecture through individual delegations and contracted activities, linked tightly enough that the corporation can act as if it were a single entity - even within pure market economies.
In early 2026, a new type of collective has caused a flurry of internet interest: Moltbook, a Reddit-style environment for decentralised AI agents interacting through message boards. It has already caught the attention of Andrej Karpathy - and where Karpathy glances, the field often stares. Simon Willison has given a great general and technical overview, which I won’t try to repeat here. What I want to point out is how this relates to a potential next step in the AI research frontier.
At the 40th Annual AAAI Conference on Artificial Intelligence (AAAI), held in Singapore last week, it was striking how many papers focused on AI agents: autonomous bots using tools to achieve tasks, running scientific pipelines of experiments, being deployed to applications in finance or health, and so on. Broadly speaking, these papers were still concerned with chaining together activities from individual agents, or with observing limited interactions between a few agents in (say) a business negotiation - in other words, the diverse multi-part activities that individual humans can perform as self-directed intelligent beings.
Performing the activities of humans has long been considered a (contested) definition of Artificial General Intelligence. Yet we have long had structures that exceed the limits of individual humans - structures that can aggregate information, decide, and act at a scale impossible for any one person: collectives. These large corporations and national institutions are necessary for almost all activities of economic significance today. And they are not that far removed from the history of artificial intelligence.
Few remember that Herbert Simon’s original work was not in computer science (for which he won the Turing Award), nor strictly in economics (for which he won the Nobel Prize). Instead, his PhD thesis in political science consisted of work on public administration and corporate decision-making. The 1947 book that resulted from it, Administrative Behavior, was updated across four editions through 1997 and remains one of the single best texts on firm-level decision-making. Ideas from this work - including that management hierarchies exist partly to filter, process, and aggregate information to fit within the limited bounds a human decision-maker can attend to - reappeared in his research across other fields throughout his career.
I’ll wager that we’ll begin to see a similar shift towards collective-intelligence research within the AI community. The same pressures that force humans to form corporations - limited cognitive resources, shared activities, and specialised expertise - are likely to reemerge as multi-agent systems scale to dozens or thousands of individuals. Moltbook is perhaps more anarchic chaos than the efficient structures that will eventually be rediscovered within the framework of agent collectives. For economists and management researchers in organisational decision theory, mechanism design, and collective decision-making, there may be an opportunity to get a head start on this shift within the AI community.
If 2025 was the year of the AI agent, it is quite possible that 2026 and 2027 will be the first years of the AI corporation.


