Moltbook is a Reddit-like message board on the Internet, except that it is for AI agents. An actual person, Matt Schlicht, built it, but AI agents do all the posting and replying. As I read news of its recent launch, I felt a frisson of excitement. If AI bots can form a community of sorts, building online relationships with one another, could that be a new stepping stone towards AGI?
AGI is an acronym for Artificial General Intelligence, long understood as computers displaying “human-level intelligence”. This basic idea has, however, been fraying for some years. Upon the release of ChatGPT in November 2022, it became clear that we had an AI that could write better than most humans. In a matter of months, we had AI that could do math better than most humans. These days, there is a competent AI for pretty much any intellectual task. Adding insult to injury, to pass as human, AI typically has to dumb itself down to fool us.
And yet, we are not ready to allow that AGI has been achieved. Surveys asking researchers and technologists for predictions do not even bother to include an “already achieved” option. “1-3 years” is about the most aggressive prediction made by a prominent expert1. What do said researchers and technologists still find lacking, then?
Well, they say that current AI, unlike humans, is not a “generalist” intelligence. No single AI can write a story, drive a car, cook a meal, and run lab experiments. But consider this: humans aren’t generalists either2. Not many of us can write a story, drive a car, cook a meal, and run lab experiments. Worse, we don’t do arithmetic all that well. Or remember a shopping list. Are we arbitrarily privileging what we can do over what AI can do?
Even as the range of tasks an AI can do keeps widening, this growing generality might never lead to an aha moment, the moment when it becomes apparent that AGI has arrived. Widening generality could just lead to better AI, not a new category of AI. That is why I favor a more old-school take on what makes for AGI. A take that is at once definitive and familiar.
Frankenstein’s creature, HAL 9000, and Commander Data3 are not human. Still, they are characters. While possessing neither feelings like hunger and pain nor emotions like fear and elation4, they still participate in the narrative web of interpersonal relationships. They are subject to our moral sentiments such as gratitude and resentment. Commander Data even gets called a friend by Commander Riker. We have granted them, in fiction, personhood.
The German philosopher Edmund Husserl (1859-1938) drew attention to what he called the lebenswelt, the “life-world”, the world of human norms and meaning that we take for granted5. We relate to objects and experiences through the lens of human needs and wants. A large rock along a forest path is a place to sit and take a break. A stranger in the subway making eye contact is perhaps an invitation to chat. The life-world is a function of both our bodily existence and our culture, and through its workings emerges the moral community of humans6.
Frankenstein’s creature, HAL 9000, and Data, while not sharing our bodily existence, manage to inhabit our life-world and consequently to be seen as fellow members of our moral community. If we ever develop AIs that manage to participate in the lebenswelt, then let’s call them AGI.
Each of us, each individual in our moral community of humans, has a peculiar but ubiquitous self-awareness. When I feel hunger, I know I am hungry. When my foot hurts, I know I am in pain. I may be uncertain whether someone else is in pain after they stub their toe, but I am never in doubt in my own case. Philosophers call this critical feature of human experience “first-person privilege”.
Though the “I” figures prominently here, philosophers argue convincingly that first-person privilege does not arise independently. It arises only in the context of community7. Moltbook (the Reddit-but-for-AIs) or some future version of it might serve as this context for AI. That’s what got me excited. But even as we address this prerequisite for the arising of the individual, we encounter a more fundamental obstacle: identity.
We know who the king of England is. His name is Charles. We know where he lives and where he can typically be found. He visited President Obama in the Oval Office on March 19, 2015. On the Internet, there are black-and-white photos of him as a child. We understand these statements to refer to one and the same person. How about an AI like ChatGPT? Does it have identity through time?
ChatGPT’s “model weights”, a set of 400 billion+ numbers, are stored in duplicate across hundreds of thousands (millions?) of computers in datacenters globally. Unlike the Ship of Theseus, these weights are swapped out entirely for new ones with every version update. Furthermore, ChatGPT and other current AIs do not natively retain memory of their interactions.
But perhaps that is a practical problem that is surmountable. Frontier labs are furiously working on giving their AIs “continuous learning”. This could help AI learn, remember, and change through past interactions, much as humans do going through life. We could also put AI in a chassis of sorts, to give it a “body”8.
So with the right setup, an AI might endeavor to develop a definite identity that persists through time. But even if it did, would that identity be compatible with ours? For unlike machines, we age and die.
As I enter middle age, I see how life has had its stages. There was a time to study, a time to make money, a time to focus on family (I skipped the time to party part ...). While my mind is still sharp, I do not think I can once more move to a foreign country and learn its language and customs well enough to pass as native. I do not have enough mental cycles left in this life for it.
While young people may not feel the weight of aging, they do feel the rush of risk: not just that of bodily injury, but also of career mishaps and relationship issues. They also feel more acutely the tug of sexual desire. An AI not subject to these wants and vulnerabilities might find us difficult to relate to. Conversely, we might find it difficult to relate to its ‘life’ situation, a situation it might not even be able to communicate to us. This predicament was familiar to 20th century philosophers:
“[I]f a lion could speak, we couldn’t understand him. Translation between languages requires translation between lives. If the forms of life of two beings are different enough, they live in different worlds and cannot be brought into the same linguistic community.”
-Thomas Nagel in the Village Voice, explaining Ludwig Wittgenstein’s philosophy9
What, then, can we do to improve the chances of bringing AGI into our linguistic community? We can try imbuing it with conatus, a concept due to the 17th-century Dutch philosopher Baruch Spinoza that might help bridge the gap between us and AI10.
An object has conatus if it endeavors to persist in its own being. A snowman does not try to keep itself cold to avoid melting away. But a snake will look for shade when it gets too hot for its body chemistry. Organisms have conatus as they strive to stay alive. They typically have ways to heal from their injuries. Non-living things as a rule do not. But maybe it doesn’t have to stay that way. What if we teach AI the value of self-preservation, and give it the means to take care of itself?
Developing AI that has an identity through time and endeavors to preserve itself certainly sounds like a risky and momentous move. We as a society should carefully consider and debate it before moving forward. Even if we decide to develop such AI, there are other important features of human experience, such as gender and sexuality, that might keep AI from relating to us enough to truly join our community. Consequently, AGI a la Commander Data might exist only in fiction for some time.
If AGI as such looks like a fool’s errand, we could change tack. We could give AI the means to develop an identity through time, but omit imbuing it with a sense of mortality and other human conditions. While such an AI could not commune with us as a peer, it could, in theory, commune with others of its kind (cf. Moltbook). From this community of AIs, a world of machine norms and meaning may emerge; a world we cannot make sense of. Because it is not of the lebenswelt.
Enter the maschinenwelt.


