The Sam Altman Playbook: Fear, the Denial of Uncertainties, and Hype
garymarcus.substack.com

Jeez, I'm a Sam Altman skeptic but this is just another level. How about instead of attacking this guy and literally deconstructing a single-sentence tweet to make him out as an evil bogeyman, we make some constructive arguments for these so-called 'more reliable' ways of building AI and why the current approach is 'deeply flawed'?
> we make some constructive arguments for these so-called 'more reliable' ways of building AI and why the current approach is 'deeply flawed'?
FWIW, Gary Marcus wrote an entire book[1] that, from what I can tell, purports to do that. I have not read the book (yet) so I can't personally attest to how well it meets that goal, but at least be aware that it exists.
[1]: https://www.amazon.com/Rebooting-AI-Building-Artificial-Inte...
TL;DR: he's vaguely arguing for neuro-symbolic AI without proposing anything specific that's implementable, let alone building it himself. It's just snarky flaw-pointing and kvetching at people who, unlike him, actually build stuff.
I read the article and oddly enough I have to admit I share some of its vibe. You see, when Sama was on his previous tour, I expected to learn something interesting regarding the future of LLMs. Instead, the guy talks for hours about how they are making a new cryptocurrency, and have a plan to scan the retinas of all humans on Earth, and so on. Seriously, it was a very weird and unsettling experience.
In any case, it was clear that he doesn't care at all about the "Open" part of OpenAI and that he sees a future where everybody uses OpenAI services rather than having their own, powerful private models. I'd say our interests are very much in opposition.
I don't like the method, but I also think this guy is a borderline scammer. There are plenty of things he shouldn't say about the technology OpenAI develops, yet he says them intentionally to create hype, however irresponsible that is.
But the worst to me, by far, is making OpenAI a for-profit organisation that does not release the models. More than that, he ordered official support killed for other projects like OpenGym and others. The impact of this change does not matter. It is dishonest and possibly has some level of illegality in it. He is literally surfing the hype, as the crypto guys did.
As someone else who is not an Altman fan and generally skeptical of people pushing weird AGI scenarios:
I do not think Gary Marcus has anything interesting to say about current AI, and by that, I mean anything that's not a cheap gotcha, restatement of an obvious fact, or something entirely disingenuous.
In a similar boat, and I find Marcus weird. Initially I thought he had some valid complaints, but the more I heard, the more he pushed into unreasonable territory (maybe he started out more rational and went off the deep end ¯\_(ツ)_/¯)
This is a problem I have with a lot of people in the AI safety or criticism space. There's a lot to criticize LLMs and AI for. There are a lot of real and concerning things that can cause real world harm, with systems we have __now__. But attacking him or any of the AI doomer/X-risk stuff just muddies the waters and makes those real conversations near impossible to have. Just primes people the wrong way. I'm not a conspiracy theorist but I'm not surprised by people that think these two groups are on the same side. Just seems like a lot of attention seeking and we can fight the hype without creating a different kind of hype...
Sam Altman is the poster child of the silicon valley techbro (derogatory).
He and his orbiters have done real and irreparable harm to the Internet while profiting enormously from unlicensed copyrighted works (aka piracy).
"Open" "AI" is pure hypocrisy, an embodiment of the two parallel sets of rules: one for the commons and one for the elite.
Sama deserves more than a little pushback on and scrutiny of his claims.
> He and his orbiters have done real and irreparable harm to the Internet

He and his orbiters have done real and irreparable harm to the ecosystem by pushing such intense compute systems unthinkingly.
You mean the environment? Absolutely.
We can't have discussion of that on HN, though, so the moderator has flagged it.
Users flagged it.
I stand corrected.
I still despise the HN policy of "no criticism of tech CEOs" whether official or otherwise.
Try to say something bad about Steve Jobs? Nah that's not "curious conversation".
Whatever.
It wasn't flagged for being critical of Altman, though. There are long, critical discussions of Altman here all the time. It was flagged because Gary Marcus is fast descending into provocateur-ish nutterdom and the article is, quite simply, poorly constructed rage bait.
About your own two downvoted posts: They probably aren't downvoted for the reasons you think! Your opinions are pretty widely shared here. A lot of people agree with you about the copyright issue, about the farcical non-openness of OpenAI, and about the environmental stakes. It's the other bits (insults, complaints about moderation) we can do without.
The faux-politeness is one of the reasons HN has a reputation for being arrogant and aloof.
I think Sam is a bad person who deserves to be insulted. I prefer to call a spade a spade.
The manoeuvre to become a for-profit organisation after years without paying taxes says something deeper about OpenAI's true way of operating.
Accepting deals with people who have no shame about pulling such a trick is accepting that they will do it again on a much bigger scale.
This is no different from other clever business structures; the focus on Sam, whatever he deserves, seems deliberately aimed at not talking about the forest. Sam is an easy target; it doesn't require much research.
Yeah, it’s unfortunate that such ruthless pricks have control over such a powerful and important technology.
Should be illegal.
For what it's worth, try to take it easy on each other in this thread. I cannot think of a worse messenger than Gary Marcus for this: he's directly equivalent to Sam, but on the downside.
AI-as-meme has been around long enough now that it's being force-interpreted into two camps: "it's all a scam; at best it only knows how to reproduce exact training data" and "its glory shall have us working 0 hours by 2030". This article won't shed light on reality, the in-between.
e.g., let's walk through the opening:
1) "How do you convince the world your ideas and business might ultimately be worth $7 trillion dollars" -- he's referring to an unsourced rumor, that never made any sense, that Mr. Altman approached Saudis for $7 trillion to build GPUs. Even the nonsensical gossip rag rumor explicitly has nothing to do with OpenAI, part of the "intrigue" was the perfidy of Sam doing it separate from OpenAI.
2) "Sam Altman is on a tour to raise money and raise valuations...at some of top universities in the world", figuratively, maybe, but not actually - they just raised a round in December and this isn't a company that needs to do PR at universities to catch investor attention.
3) "A few days ago at Stanford, Sam promised that AGI will be worth it, no matter how much it costs" -- actual quote: "I don't care if we burn $50 billion a year, we're building AGI and it's going to be worth it" -- that's not a "promise" nor "no matter how much it costs" -- yes, $50B is functionally 'no matter how much', so I'd give Gary charity of interpretation on that too - except as long as we're doing that, why are we over-reading Sam?
Long time HN-er
I'm flagging this. It's not because I give a hoot about sama but because this kind of crap is posted only to lead to endless discussion.
Get to work. Focus. Whatever the hell Sam is can wait for another day.
Yes never discuss technology critically, only grow and make more more more. Never consider the actual impact of your work, simply produce and consume.
I find that hilarious because I actually feel the opposite way.
We have ignored the impact of what we do far, far too long. Forums like this facilitate this false dichotomy. Society IS tech. To believe otherwise is stupid.
But I have found a mantra: if our conversation is in any way getting into whether somebody is good or bad? I am no longer a positive influence on society.
This failed my test. It has nothing to do with Sam Altman. If you want to talk about whether X person is good or bad, go somewhere the hell else. I have actual work to do.
If society is tech, then people are society. Leadership should be scrutinized more than anyone else in the industry, they are ultimately responsible for setting the direction of the industry in our unfortunately corporate dominated tech sector. Flagging discussion about them because you think it's simply gossip is not helpful. If you don't like a conversation then don't participate in it.
I'll also flag, and I've never flagged anything before.
Sam A's job is to sell OpenAI. It's not interesting that he talks about why it's so great!
3rded.
There's nothing substantive in this article. The author expresses a seething hatred for OpenAI's CEO by repeatedly lampooning ("where's the evidence!?") brief quotes, without context, of Altman's opinions/predictions. I hope that the author escapes whatever rut made him so overwhelmingly bitter and resentful. I've been there, and it truly stinks. :/
In my opinion, might be worthy of a meta rule: If the only result of a post is to discuss the value or general opinion of a particular public person, it is, by definition, not a post we would like in our conversation.
Selling snake oil? Of generative AI? Where most companies have failed to turn profitable just because these models are not as useful and reliable especially in "high value use cases" such as "read this 100 page insurance policy and tell me if my situation X is covered or not and in both cases under what clauses" kind of cases?
GPT-4 is absolutely incredible and even if we never get beyond it the world is a much better place with it than without it. It makes total sense to bet on the team that made this being the best placed people in the world to advance it.
I disagree strongly with this. GPT has:
- Precipitated a new class of computationally intense and expensive systems at a time when we desperately need to be focused on sustainability and reducing power demands/increasing efficiency.
- Devalued human labor without being high enough quality to truly replace it.
- Grossly violated an unspoken social contract of the internet by abusing the commons, leading to many people locking down their content.
- Flooded the internet with unverifiable noise that looks credible, making it harder to find high quality information and enabling scams and laziness.
At best this productivity tool is of neutral value to society. If you think it's a net good, I question your judgment.
- No, we just need better ways to generate electricity, which Sam Altman is also working on.
- If something is not replacing labor, it will not devalue it. In this case, GPT4 is replacing human labor, which is indeed devaluing those specific labors. But it is also unlocking new potential, which in the history of all technology has always been a net positive in the end.
- I wasn't aware of this unspoken social contract before OpenAI existed, but maybe I'm just ignorant.
- This is true, but this is also a trend that has been headed downward for a long time thanks to Google/SEO. The signal-to-noise ratio has indeed gone down due to GPT-powered blog spam, but honestly we needed to get our act together before GenAI anyway. This is actually lighting a fire under people's butts to find ways to avoid the AdSense/affiliate marketing fueled drivel.
Oh good, I'm glad Sam Altman, who doesn't even have a bachelor's degree, is on the case. I'm sure his efforts towards cold fusion will be appreciated.
> I wasn't aware of this unspoken social contract before OpenAI existed, but maybe I'm just ignorant.
It's shocking to me the number of programmers out there who simply did not realize everyone would be mad at them for leveraging everyone's work into a massive for-profit system. Then they play stupid when people rightly call them out and spout some bullshit about outmoded forms of production, as if productivity were an issue for the generation of culture.
Clearly he is not the one doing the physics himself; that is an uncharitable interpretation of the point (but I am sure you're aware of that). He is funding fusion research. It is reasonable to assume he is doing that so large-scale AI can be a thing without people worrying about the environmental impact.
Does OpenAI make a profit?
ChatGPT is offered by the for profit arm of the company. The relative success of that product with respect to it being profitable is irrelevant to it being for profit.
On the contrary, it is incredibly relevant whether or not the "for profit" system actually makes a profit. If ChatGPT does not make a profit, it is less "leveraging everyone's work into a massive for profit system" and more "leveraging everyone's work into a public good that is provided at or below cost, like a library".
I'm sure Microsoft invested billions in this company so it could become a library.
The intentions of some don't change the current reality for many.
It's easy to forget tangible win-win examples like this[1] or this[2]. Speaking from my personal experience, it's improved my life by reducing tedium associated with writing code. Small wins, but there's lots of them spread over many people. They are not abstract and it's harder to write grand narratives about them.
> Precipitated a new class of computationally intense and expensive systems at a time when we desperately need to be focused on sustainability and reducing power demands/increasing efficiency.

Microsoft are building over 10GW of firmed renewables which will partly power their AI datacenters. It is not all greenwashing and cynicism. As far as emissions go, GPT should not be put in the same basket as truly wasteful sectors like beef or crypto mining. Everything takes energy, including productivity tools like GPT. The focus should be on sustainable growth and scaling firmed renewables, which is the only politically realistic way out of the climate crisis. Degrowth can't work, either on a political level, or on a company level given the competitive capitalist system that they exist in. I find this[3] a good discussion on that topic.
> Flooded the internet with unverifiable noise that looks credible

I agree with you on this. It's not all rosy, but let's not let social media off the hook. LLMs have not made long-form journalism worse, for example. The problem is the interaction of LLMs with incentives created by SEO algorithms and social media.

[1] https://globalnews.ca/news/10463535/ontario-family-doctor-ar...
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5478797/
[3] https://podcasts.apple.com/ca/podcast/is-green-growth-possib...
I don't buy the idea that there's nothing we can do about this with respect to whether we do it at all. I'm happy for Microsoft that they're building out green power infrastructure, but it would be better if that was put towards displacing fossil fuels rather than enabling extremely inefficient bullshit generators.
How exactly is the world so much better with GPT 4?
I'm not saying it's not cool tech or that the current level of LLMs don't have some impact, but "the world is so much better" is quite a stretch (unless you are a Nvidia shareholder)
Not to mention the adverse effect of job loss in industries such as customer service, almost exclusively hurting people from poorer countries and lower socioeconomic levels.
Yeah, I mean I work inside a Fortune 10 company and after countless man hours across multiple teams we have exactly zero LLM applications in production and the pipeline heading to production is empty.
I guess it’s good at generating plausible blog spam and helping children with homework. I’ve used it to bootstrap my own writing. It’s not entirely useless but hardly world changing.
I think the biggest commercial use right now is Klarna uses it for basic lvl 1 support? I don’t know the details but it sounds like a good result from RAG over a fairly constrained corpus. So, again, nice but completely unaligned with the massive valuations in that space right now.
So the ends justify the means?
GPT-4 is peak LLM, nothing will likely be able to surpass it by these same methods for a long time. The only novel product OpenAI could release at this point is a fully uncensored and unrestrained GPT.
Introducing: GPT-ns4w
From what I can tell, all the GPT team did was scale up ideas invented elsewhere in previous large language models.
I am not sure that is a reason to bet on them making any breakthroughs needed for AGI.
OpenAI also pioneered reinforcement learning from human feedback (RLHF) to make LLMs output completions that are valuable/relevant.
I agree that OpenAI has provided no evidence they are anywhere close to AGI (and I don't think LLMs are sufficient), but I also think OpenAI should get a ton of credit for being the first party to make LLMs that are actually useful.
Sam has always been very skilled at dealing with the media: making grand pronouncements, apocalyptic statements, save-the-world predictions, rags-to-riches hero stories. Which is fine. It's hard to get anyone to pay attention to anything, and he's got a nice playbook (and product) to get people to pay attention.
The more interesting question, I think: is OpenAI actually a good business? Can they generate the resources they need to meet their goals and keep control, without selling to big tech companies that will derail their plans? Do they have enough of a moat, and can they benefit from network effects to make their products more valuable over time without getting copied? They realised they need a lot more capital than they initially thought. Time will tell if Microsoft is able to take over - see what recently happened with Inflection AI: After raising $1.3B, Inflection is eaten alive by its biggest investor, Microsoft https://techcrunch.com/2024/03/19/after-raising-1-3b-inflect...
>(again without presenting evidence that historically extremely difficult problems are close to being solved)
This tells you everything you need to know about the author. Anyone that has solved difficult problems knows that evidence of being close to a solution is not a thing. In fact, by the time you're close, proving you're close is _harder_ than finishing the solution.
Gary Marcus has become attention seeking lately. I unfollowed him. Most of his posts were attacks on other people instead of genuine contributions on how we can make AI actually better and safer.
Easy to criticize, much harder to offer effective solutions.
Sam is the kind of man who holds a pair of twos and boasts he'll go all-in if someone raises above his $20 after a 4-7-9 flop.
Why have AI and belief in AI leaders turned into a religion? A devoted cult?
Because both sides don't have conclusive arguments?
Hard to take Gary seriously calling out "outlandish claims" with "no substance" when he does the same thing in the opposite direction.
Garbage article for clicks to pay for his lifestyle, now that he's grifted his way into being an "AI Expert" paid to pontificate with no skin in the game.
Is this something different to any startup?
The current AI wave is a perfect application of Conway's law: the bullshit industry has generated the perfect bullshit machines, pretending to show intelligence when they only parrot what they've heard elsewhere - badly but convincingly.
Yes. But I predict we will eventually realize that's what humans do too.
> that's what humans do too.
Maybe so, but is it ALL humans do? If not, then what about the rest, and how do we do that?
I don't understand why people are so eager to lap up this crap.
I mean, that’s too reductive. LLMs can do all kinds of things, not just generate bullshit.
LLMs can _pretend_ to do all kinds of things, and fall flat on their face as soon as they can't fake it.
Turns out pretending to do useful stuff is actually pretty useful.
I mean that's just describing a solid half of Hacker News comments on any given subject.
If not for the other half HN would be a very different place.
More than half
Does anyone know what Sam’s great super power is besides acquiring power? He may as well be another less successful flavor of Elon Musk.
I haven't taken anything Gary says seriously since like 2017. I'd recommend others do the same, you'll save yourself a lot of time.
If you know about the author of this post, Gary Marcus, you can just as easily ascribe to him fear, the denial of uncertainties, hype, and self-promotion/grifting.
This post is not going to age well. In fact, when GPT-5 comes out soon it's going to look positively dumb.
Also GPT-4 is still leaps and bounds more capable than the competition. Anyone pushing large-language models to their limits today can easily attest to that. Claude 3 Opus comes close, but is significantly more expensive and much harder to do function calling with.
I wish I had a percentage of your childish optimism
My comment history about Sam Altman being a snake is aging well.
Approximately 8 billion people have accomplished less in their lives than Sam Altman... maybe criticize them instead? And so what if he's selling a vision of the future? That's a large part of entrepreneurship.
Accomplishing more != Benefiting Humanity
GPT 3.5 and GPT 4 have benefited quite a lot of software engineers at a minimum. I don't see why GPT 5 would be any different.
My point stands; nobody can mistake this for charity, Sam Altman is selling a product. He is just as conniving as any other businessman when it comes time to act in his own personal interests, which makes him no better than a business as a steward for populist change. I hope OpenAI collapses, so that the market is forced to pick a more responsible leader.
I think it also has hurt a lot of software engineers in a number of different ways. AI assisted programming tools are an experiment and the jury is still very much out.
He can go ahead and create AGI; the second he creates it, however, it should be taken away from him and become public domain. Or we need a change of system: a true AGI should be able to automate every single industry, and if so, I think everyone knows which system is the only one left to implement once humans are no longer required to work and every single factory/company produces much higher output.