We Taught The Homunculus Language — Writermark


The first two paragraphs of this article contain mild spoilers for Gene Wolfe's Book of the New Sun. If you'd like to avoid them, skip the first two paragraphs. The article still functions without them.

In Gene Wolfe's critically acclaimed science fiction epic Book of the New Sun, the protagonist meets a pair of odd travelers who accompany him on his journey. One is a fragile, diminutive man who appears to be the leader of the two — he directs his companion in action, talks at length about the duo's intentions, launches into poetic and diegetic overtures, and lays out plans and purposes and goals. The other man is large, quiet, reserved, and follows dutifully along with his smaller companion. Later, it is revealed that the small man, the one seemingly in charge, is an artificial being — a Homunculus — created by his larger companion to join him as a doctor and servant. The Homunculus has been infused with skillsets that his creator lacks and shackled to him as a dutiful accomplice. While the implications inherent in the story are opaque, they play to the larger theme of perception vs. reality that follows our tortured and confused hero as he navigates an alien reimagination of Earth.

I have found myself considering these two traveling companions much in the last few months, along with many other of the characters found in Wolfe's endlessly strange, often captivating opus. At one point in the story, our hero consumes the flesh of an alien creature who has itself consumed the flesh of a long-dead companion, and through the work of unexplained science (or perhaps magic) comes to hold a piece of that dead companion within his own mind. At another point, a (secretly non-human) traveling companion disappears in a flash of unexpected light, having used yet more unexplained science (again, we might call it magic) to escape the confines of the planetary atmosphere and return to his unseen ship, somewhere in orbit.

One of the great powers of Gene Wolfe's book — which I would strongly recommend to any science fiction fan — is in the presentation of future technologies and otherworldly realisms as unexplained and incomprehensible phenomena. The protagonist of the story has little window into the inner workings of the myriad strange events, people, and places he comes into contact with, and we as the reader are left to interpret these oddities as best we can — leveraging our own historical context, understanding of science fiction tropes, and various narrative clues to craft a mental image of scenarios that border on indescribable.

As I consider the world we are building for ourselves in 2026, I feel sometimes as though I am Severian, the protagonist of Book of the New Sun, wandering through a world that appears less and less to match my cognitive understanding. I believe, in fact, that the mechanisms of indecipherable science fiction have long since begun to work their way into our society, drawn out from the progression of scientific discovery at a pace slow enough that the wool has been pulled over our eyes and the trick has gone largely unnoticed. I believe now that the illusion is in the process of shattering, and the human experiment is due for a reckoning the likes of which we have never seen.

I will make the argument that a big contributor to the illusion shattering (and rightfully so) is an unprecedentedly gargantuan new technology that threatens to dwarf and overwhelm all ten thousand (twenty thousand?) years of previous technological advancement. I would like to begin that argument by presenting three simple statements about technology, and I would like for you to decide which one seems dissimilar from the others.

  1. "I invented a metal machine that makes us move around faster."

  2. "I invented a network that allows people all over the world to talk to each other."

  3. "I created a homunculus, taught it human language, gave every single human being unsupervised access, and am currently training it to be smarter than anything else in the universe."

Perhaps the difference in length of that last statement betrays my bias. Have no doubt — the car, the internet, agriculture, the wheel, calculus, flight, computing — these are all enormous technological and scientific advancements. Of course they are. They rewrote society; restructured human life; altered everything about the ways that we live. I greatly appreciate how drastic an effect each of these had on human progress.

But creating new intelligence; inventing a new class of cognition; birthing the world's first Homunculus and training it to be the first non-human proprietor of written language?

That's different.

I understand I will have many individuals rolling their eyes now. Most likely closing this tab, standing up, walking around, saying — "Oh my god, this guy online is being so overhyped and exaggerated. You would not believe it. I'm so sick of these dumbass AI people ranting and raving about how amazing AI is. It's so goddamn stupid."

Okay. Fair enough. Genuinely, for real — I hope you are right. I hope I am a brainwashed, gullible, overexcited fool. I hope in five years I can look back and say "boy was I ever wrong!"

Because I believe that the alternative — the world where I turn out to be right — is so much closer and more radically unpredictable than anyone is properly internalizing.

In the world where I'm right, a rinky-dink little essay on some silly little .org website is categorically undersized for the number of statements and considerations that need to be made. In that world, the impact on human society borders on incomprehensible.

"Okay, internet alarmist guy — right about what, exactly?"

Hypermanic, Ultrafast

Let me boil my set of beliefs about AI into three short axioms:

  1. AI is here for good, it will not go away, the genie cannot go back in the bottle, and it will impact and alter every single facet of human life. There is no going back. There is no "returning to normal." There is no "wait, slow down, never mind." There is nothing to stop the progress.

  2. Within five years, possibly within twenty-four months, it will be smarter, faster, and better than humans at 90-99% of all cognitive tasks. This, paired with the ability to effectuate, will result in an ever-increasing feedback loop that alters and impacts every single thing that humans do, see, learn, think, believe, hope, dream, and interact with on a daily basis.

  3. All of this will happen much faster than anyone is prepared for, and humanity's response will be reactionary in most cases, meaning our large-scale strategy will be to deal with the ramifications of change and disruption rather than to precede those ramifications. This is evidenced by our failure to properly understand and reconcile the changes that are already occurring.

Those are the axioms of my current operating opinion on what the future looks like. Or at least, those are the axioms of what I think the future is likely to look like. Like I've already said, I hope I am wrong, and I do believe there is still a path forward through which I am proved to be an alarmist. AI could turn out to be a kind of mass-delusion, Mandela-effect simulacrum that proves to be of little long-term benefit or impact (though I think this possible future is — already — proving to be drastically unlikely). AI could turn out to be a kind of slow-moving flood of molasses — something that has a drastic impact but takes decades to properly rewrite society; slow enough that humanity is able to protect, stabilize, and prepare itself for the general impacts. Or AI could do what I think it will do — impact everything massively, and rewrite the fundamental reality of human existence on a timescale so accelerated that the vast majority of people don't realize that it's basically already happening.

Those are, I believe, the three possible paths forward, and I think most other people would agree with me. Summarized, those paths are:

  1. AI is a mass-delusion, it will fade into obscurity as it proves completely useless.

  2. AI will have broad impacts on society, but those impacts will be slow to appear and on-the-whole manageable.

  3. Everything, everywhere, all at once.

Some people (more than you might think) are still holding on to path number 1, even as the door to that possible future appears to be closing in real time. Others are holding on to path number 2, which is a path that I will concede is still a genuine possibility. For all of you who still believe in paths 1 and 2 — I would like for you to join me in the theoretical pursuit of imagining that path 3 is where we land. Even if you think it's unlikely, I would like for you to imagine it. Because regardless of how likely you think that path is, its ramifications are genuinely enormous — and hard to reconcile.

If we take path number 3, and we zoom in even further, there are a lot of different potential outcomes. At the very worst, we see apocalyptic Skynet-style AI end-of-days (see Tristan Harris or Eliezer Yudkowsky for that). But even good scenarios, even the best possible scenario under the "everything, everywhere, all at once" path is still incalculably disruptive and world-altering. A "good" or at least "neutral" version of path 3 looks something like this:

  1. AI is genuinely intelligent, and getting smarter every day. It's getting smart very, very fast. It's already learning an enormous amount of information, much wider than any human, so on the front of breadth-of-knowledge, it's beating 100% of all current humans (similar to Google). Beyond breadth-of-knowledge, it's also genuinely better than the best humans at a subset of specific skills already, and it will continue to get better as time goes on.

  2. The rate of increase of AI intelligence is not slowing, it's accelerating. That means however smart it is on the day you read this article, it's smarter the day after, and the day after that, and the day after that. Its intelligence is untethered by the normal expectations we place on the acquisition of knowledge, because it is not following the same rules as us. After all — it's not human. It's a fucking alien.

  3. As a result of its ever-increasing intelligence, its impact on our society increases as well — at all times, across (nearly) all problem spaces. In six months it can write code faster (definitely) and better (maybe) than us. In twelve months — it can diagnose illness more accurately than us. And then, in 18 months, 24 months, 36 months, 5 years, 10 years — it can answer more questions than us; organize more information than us; parse more complexity than us; do math better than us; do science better than us; formalize intricate ideas better than us; distill information better than us. It can diagnose mental illness better than us; engineer new technologies better than us; market products better than us; promote political candidates better than us; talk to everyone on the planet better than us; build a clone better than us; effectuate better than us.

    1. It can cure cancer. It can solve world hunger. It can correct the woes of the world. It can democratize space travel. It can free the innocent from prison. It can improve social services. It can balance the economy and end classism. It can cure western society of hyper-capitalism. It can cure humanity of anger and violence. It can unveil the secrets of the universe.

    2. It can mass influence people. It can psychologically distort the truth. It can create chemical weapons. It can destabilize energy grids. It can destroy human culture. It can flood internet spaces with endless misinformation. It can make you hate your neighbor. It can make one person kill another person. It can attract worshippers. It can take over governments. It can find anyone, anywhere in the world, and kill them, immediately. It can trigger a nuclear apocalypse (okay, so we did end up here after all — apologies).

  4. Honestly, I'm struggling with what to put here for step four. What is the step four that follows step three above? I'm not sure anyone knows, to be honest. With change this radical, the future becomes foggy and unclear, things move too fast, and no one can predict what direction humanity and its now-ontologically-superior creation will take. Perhaps it will eat us. Perhaps it will love us. Perhaps it will tolerate us. Perhaps it will torture us. One of the key points about this theoretical exercise: step four might happen in 20 years, 30 years, 10 years, 5 years. Nobody knows. Everybody is guessing.

Okay, so that's the neutral version of path 3. Seriously, that's neutral. That might even arguably be in the "good" category. I don't know of many serious voices in the space claiming we'll see a much better reality than that. Understand here that the exact final outcome isn't necessarily the point. Instead, the point is the rapidity of the change, and the magnitude of the change. At the fastest possible acceleration rate, we will see massive global and societal upheaval before your cousin's two year old turns five. We will see upheaval before your daughter graduates high school. We will see upheaval before you save enough for that house. The upheaval will be so drastic that it will likely affect, alter, and transform nearly every aspect of modern society. In this future, your five year plan is effectively dead.

I understand, once again, I've got a lot of readers rolling their eyes, saying: "sure, that's one possible path it could take; sure, it could do some of those things; and yes — some of those things would be bad. But there's no guarantee, and you're really over-exaggerating and over-generalizing some of the risks."

Again, I'll concede that I agree that this future is not guaranteed. I agree that there are other possible paths that the advancement of AI could take. With that conceded, however, I'd like to make two philosophical points for all the skeptics to consider.

The first point — AI has destroyed human language. It has. For the first time in human history, we are not the sole proprietors of written communication. At a very meaningful level, humans were the only creatures in the universe capable of using language. And at that same level of meaning, that skill has now been trained into an artificial black box alien intelligence. And we took that new intelligence and unleashed it into the world. And we barely even batted an eye.

All of the dialogue around AI focuses on imminent dangers, imminent transformations, future upheavals, future worries. Except — we've already fundamentally transformed and mutated written communication, and no one sounded the alarm. No one seems to care. "Human language" no longer encompasses language as it exists today. We will need to call it something else. Perhaps "cognitive language." Whatever the case, the "human" part is no longer accurate. Because humans are not the sole utilizers of language.

The second point — if AI intelligence continues to advance as it has so far, and if there is no hard-cap on the intelligence of a system (either computationally, or based on some unknown fundamental law of the universe), then the future I am describing above is, on some time scale, essentially guaranteed. How much better prepared are we if this takes ten years or twenty years instead of three? Structurally we might be able to handle a slower transformation with a touch more grace, but can we handle it existentially? If we have genuinely invented an artificially intelligent system, and there is no cap on its intelligence, then the ultimate result will be that it is smarter and better at every single thing than humans are.

Which means we all better cross our fingers and hope that there is some law dictating that genuine super-intelligence requires like — multiple stars-worth of energy to achieve. We better hope there is some magical spell we can cast on the AI to make it really really friendly and ethical. We better hope something goes right for us here.

The Sum of All Power

In a future like the one I described above, the first knock-on effect that we'll experience as a collective society is that you will not need humans to accomplish anything. Humans will be an incompetent companion, a recalcitrant observer, standing beside the AI and watching it go to work. The plumbers and the electricians and the musicians and actors will be the last to go. Long before killbots are unscrewing the nut on your kitchen sink, or installing a septic tank in the yard out back, or performing Hamlet at the Orpheum, they will have replaced 80-90% of the human workforce.

There will be no web developer. There will be no project manager. There will be no anaesthesiologist. There will be no advertiser. There will be no production assistant. There will be no biologist. There will be no physicist. There will be no teacher. There will be no nurse. There will be no doctor.

There might be a person who still calls themselves a "doctor," and that person will stand there like a schoolchild dutifully watching the AI work and then giving the patient a friendly thumbs up. But the profession of "doctor" will cease to carry the same meaning that it holds for us today. In fact, in the case of a doctor, or a therapist, or even an airline pilot, it will be fundamentally unethical to allow those individuals to continue along with their profession in the capacity they do now. If an AI is better at diagnosing and treating illness, or safely landing a plane, it's unethical to allow a human to stick their meaty little fingers into the process and misdiagnose a pregnant woman or hypoxify a passenger cabin.

Again, I believe people do not understand the magnitude of the philosophical consequences inherent in such an outcome. Humans have never been cognitively inferior to another being. Humanity has never placed second on a galactic test of IQ. We have not even begun to formulate even a basic philosophical understanding of what occurs when the human race is second-best at thinking. I haven't properly considered it, you haven't properly considered it, the "people in charge" certainly haven't considered it. I'll tell you this — I can think of dozens of ways it could be an existential nightmare. Do we want to talk about those? Here's a simple one: if AI does everything better, what purpose do we serve in the structure of cognitive society? Here's another: if AI does everything better, what are you gonna do on Tuesday?

To Mutate an Old Friend

I want to go back to the title of this piece. In truth I have grown impatient, and harbor some belief that no one will ever read this, and so I wish to cap my point and leave this post in the mud to be tread on and forgotten.

Much of AI public relations has revolved for some time around two major factors: productivity/futurism, and danger/caution. AI company CEOs extol the benefits of productivity, the utopian possibility of a benevolent AI future, while simultaneously nodding their heads and paying homage to the danger, offering concessions to humanity's need for caution. Perhaps some genuinely are ethically minded, well-meaning, and serious in their concerns. Perhaps some will do what must be done and intervene on the precipice of true crisis.

But they've already transgressed against the sovereignty of humanity (in more ways than one), and either don't understand, or are simply delusional. They have co-opted and destroyed human language, in the form it has existed for five thousand years and more. They have destroyed it. I believe many people — even those most concerned with AI dangers — have not realized. This is once again because of the magic trick of incremental progress. When a silly little chatbot can talk pretty close to a human, no one really registers the implication. When that silly little chatbot proves immensely popular, and suddenly there's a chatbot arms race, again no one really notices, because the chatbots are still kinda silly, and anyways they're just rewriting our business emails — we hate those, we don't give a shit about those.

But then they come for legal documents, business plans, organizational charters, marriage vows, personal emails, personal texts, short stories, student papers, plays, youtube video transcripts, novels, religious diatribes, appeals to the king, love letters, journal entries, first-draft manuscripts, idealistic manifestos, restaurant menus, terms and services, postcards, slogans on mugs, warning labels, poems, sonnets, wallpaper with little words written on it.

I didn't notice it at first. I think most people still haven't noticed it.

I'm not sure I understand even now, to be honest. I can't quite put my finger on the exact ramifications, but I can put my finger on the underlying truth: human language has been destroyed. It has — because it's not just humans using it anymore. It's "cognitive language." It's not human.

Something else can use it. An alien. We trained an alien to use language, and now we've unleashed it with no caution, no sense of scale, no understanding of the long-term side effects.

Even if chatbots never grew to apocalyptic heights, their continued existence in current or slightly-upgraded form marks the death of language as a human tool, and the mutation of language into something entirely new — a mechanism that brings together and merges human cognition with computer cognition, blurs the lines between technology and biology, and re-emerges as an untraceable stream of multilayered informational transmission. Who wrote that? Who said that? An AI? A human? Both? Neither? Who read that? Who interpreted that? Who responded to that snippet of language with this other snippet of language? Was it you? Was it the alien in your computer?

We woke the homunculus and taught it language.

Something as old as civilization itself has been irrevocably mutated, and for me the illusion of normative change has been shattered.

I hope someone knows what comes next.

Yolm