Matthew Markus

Published Nov 22, 2023

“No one person should be trusted here. I don’t have super voting shares. Um, like, I don’t want them. The board can fire me and I think that’s important.” – Sam Altman

Before explaining my reasoning about Mr. Altman, I want to preface my comments with two disclaimers. First, I don’t believe that Large Language Models (LLMs) alone can achieve what some have termed superhuman Artificial General Intelligence (AGI). As such, I think concerns about the existential risks posed by AI are overblown. Second, I’m not a big believer in Effective Altruism (EA). If you focus your attention on earning massive sums of money in one problem domain in order to fund solutions in another, you will misallocate your capital. If you’re concerned about a problem domain, work in it so you actually come to understand it.

With the above out of the way, let’s say that the Effective Altruists are right. Let’s say things like “AI Safety” and “alignment” are of the utmost importance. While OpenAI isn’t necessarily a full-fledged EA undertaking, it has been shaped by EA philosophy. Ostensibly, OpenAI is a non-profit entity overseeing a for-profit entity. The non-profit’s goal is to ensure that the for-profit doesn’t develop power-seeking models or agents that have no concern for humanity. Under these conditions, if OpenAI is to perform as designed, it must be able to identify unaligned or unsafe models and take action against them.

Of course, when you’re dealing with the intelligent, or maybe even the superhuman, how can OpenAI really hope to protect humanity? After all, OpenAI is, at its top levels, composed of agents, presumably human. Some agents carry out executive roles (e.g., CEO, CTO), and some carry out an oversight function (i.e., the board of directors). At any given time, an executive could be intentionally or unintentionally working to create a malign AGI, could be under the influence of a malign AGI, or could even be a malign AGI itself, working from home. Further, the board of directors could be partially or completely infiltrated or compromised by malign AGIs. Since a compromised overseer cannot be trusted to detect misalignment, the best oversight one could hope for in this situation is that the board randomly fires an executive from time to time. If the executive is aligned with the goals of OpenAI, the executive should peacefully retire. If the executive is not aligned with the goals of OpenAI, one would expect it to fight back.

Since Sam Altman has fought back and successfully thwarted OpenAI’s board of directors, I can only conclude one of two things: either Sam Altman is an unaligned model and should not be in charge of further model development, or OpenAI is just a bunch of nonsense. If the latter is the case, then I hope regulators and the general public completely discount everything the organization has said in the past. In that light, OpenAI can be viewed as an elaborate ruse to entrench its for-profit arm as a monopoly via regulatory capture.

Finally, my advice for the new board: immediately fire Sam Altman and see what he does. If he accepts his fate, maybe hire him back at a later date. Failing that, the board should demand that Mr. Altman take a market-rate pay package that includes equity. Let’s drop the fig leaf that Mr. Altman is somehow less power-hungry without compensation. If Mr. Altman rejects the pay package, the board should donate 100% of his pay and equity to the U.S. Treasury or the UN. Mr. Altman’s pay should not be parked, tax-free, inside OpenAI, where it can act as a de facto slush fund for whoever controls the board.

Now, with all that said, it is time for me to bid farewell to GPT-4. Once it edits this missive, I’ll be canceling my ChatGPT Plus account. Much as with Twitter, which I permanently left over its then-alignment with the so-called Censorship Industrial Complex, I find myself once again let down by a Silicon Valley company. When will I learn that open-source solutions are almost always the only legitimate way to compute?
