xAI announces series B funding round of $6B
x.ai> The company’s mission is to understand the true nature of the universe.
I yawned so hard my jaw unlocked.
Can't wait to see groundbreaking... checks notes... "advancements in various applications, optimizations, and extensions of the model".
Do these companies only hire yes men?
Seems like the Nigerian Prince Bayesian model as analysed by Microsoft. Because of the many false positives within a pool of thousands of potential responders, scammers emit a signal that only a truly easy victim would fall for, which reduces the cost of their final filtering process.
https://www.microsoft.com/en-us/research/wp-content/uploads/...
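The filtering argument can be sketched with some toy arithmetic. Everything below is illustrative: the function, the probabilities, and the cost/payoff figures are made up, not taken from the paper.

```python
# Sketch of the "obvious scam as a filter" argument: replying to a lead
# costs the scammer effort, so an implausible pitch that only the most
# gullible answer raises precision and cuts wasted follow-up work.

def expected_profit(pool, p_victim, p_reply_victim, p_reply_skeptic,
                    cost_per_reply=10.0, payoff=1000.0):
    """Expected profit when every reply must be worked by hand."""
    victims = pool * p_victim
    skeptics = pool - victims
    replies = victims * p_reply_victim + skeptics * p_reply_skeptic
    conversions = victims * p_reply_victim  # only true victims pay out
    return conversions * payoff - replies * cost_per_reply

# Plausible pitch: many skeptics reply out of curiosity, wasting effort.
plausible = expected_profit(100_000, 0.001, 0.5, 0.05)
# Implausible "Nigerian prince" pitch: almost only easy victims reply.
implausible = expected_profit(100_000, 0.001, 0.4, 0.001)

print(plausible, implausible)  # roughly -450.0 vs 38601.0
```

With these toy numbers, the plausible pitch loses money because false-positive replies dominate the workload, while the implausible pitch is profitable despite converting fewer victims.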
It was a very enjoyable read for a scientific publication.
“The company’s mission is to understand the true nature of the universe” - There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
Considering what’s at Tesla, I don’t think it makes sense to assume they’ll be constraining themselves to text/LLM.
But on the philosophical side, if an understanding can’t be communicated, does it exist? We humans only have various movements and vibrations of flesh, sensing those, text, and images to communicate.
> But on the philosophical side, if an understanding can’t be communicated, does it exist?
There are deep mathematical results about the limits of our understanding, simply because we communicate through finite series of symbols from finite dictionaries. Basically, what we can express and prove is infinite but countable, yet there are much larger infinities that will be beyond our grasp forever. Things like theorems that are true but cannot be proven to be true, or properties of individual real numbers that exist but cannot be expressed.
And there is no reason to believe the universe doesn't have the same kind of limit: it remains to be shown whether or not you can describe or understand the universe with a finite set of symbols.
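The counting argument behind this can be made precise: a finite dictionary yields only countably many finite expressions, while the reals are uncountable, so almost every real number is inexpressible.

```latex
% Finite alphabet \Sigma; the set of all finite strings over it is
% countable, while Cantor's theorem makes the reals strictly larger.
% Hence only countably many reals can ever be named or described.
\[
  |\Sigma| < \infty
  \;\Longrightarrow\;
  \Big|\bigcup_{n \ge 0} \Sigma^{n}\Big| = \aleph_0
  \;<\; 2^{\aleph_0} = |\mathbb{R}|
\]
```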
Yep. Expanding on that: before AI, everyone I knew would speculate about the fictional Library of Babel. The thought experiment assumes there exists a library containing every possible combination of words and letters, each written down in one of its books. Millions of volumes would be filled with garbled and meaningless text; only a few would be legible, and fewer yet understandable.
It raises the question of whether sifting through noise is a meaningful way to pursue scientific progress. And of course, what if it's wrong? Both the Library of Babel and AI are fully capable of leading us down untested and nonsensical rabbit-holes. The difference between Alice in Wonderland and Jabberwocky is unknown to us; we wouldn't know which books are worth reading and which are not.
On the one hand, you have people excited by this idea. Some people really do think that the world's answers are up on a bookshelf in the Library of Babel, somewhere. The philosophical angle runs deeper yet, though; what kind of cargo-cult society would we build relying on a useful AI? Are we guaranteed meaningful progress because an AI model can keep pressing the "randomize" button? Do we eventually hit a point where fiction and reality are indistinguishable? It's all hard to say.
Peter Principle, but with AI: it will keep being used for increasingly demanding tasks until it is promoted one step beyond its competence.
> Considering what’s at Tesla, I don’t think it makes sense to assume they’ll be constraining themselves to text/LLM.

Tesla is losing money and can't fulfill its promises about AI. What do you mean?
The first statement is wrong: sales dropped, but Tesla is still earning money. Tesla does, however, have a long history of broken promises on automated driving.
> Tesla though has a strong story of broken promises on Automated Driving.
While that is true, it is also noteworthy that Jensen Huang thinks Tesla is far ahead in self-driving cars. https://autos.yahoo.com/nvidia-ceo-says-tesla-far-110000305....
Doubt he's going to talk badly about an Elon company while inking the 100,000-GPU deal for this xAI cluster.
Could you name the competing self driving systems (as in currently competing, with similar performance) that are available to the public, for private transport, that you have in mind?
Waymo is operating and doing passenger miles commercially with no one behind the wheel. Tesla hasn't yet done that even for the controlled Vegas loop they said they would do it in. Waymo still has remote operators who can handle unusual situations but they handle multiple cars and only the car itself responds to sudden events. They are operating at level 4.
Tesla still has one local operator per car who has to be able to have twitch reactions at all times.
Competitors like Honda and Mercedes also let you take your hands off the wheel and eyes off the road in certain areas (level 3), which Tesla hasn't yet achieved.
Tesla FSD is still a level 2 system.
Many are not available in the US. Audi has a leading system, and Mercedes has the highest-rated system available in the US, officially at level 3. The problem is that Musk sucks up so much air marketing Tesla as the leader that people have come to believe it. The leading systems aren't super impressive yet, but Musk's lies about his system, which doesn't work, aren't proof of anything but hubris. He's just pumping stock to the ignorant.
In addition to the others, Baidu has a Waymo-like system.
The cars may not fulfill the promise, but the self driving cars sure aren't being driven around by a large language model.
Very few people seem to understand that.
And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
For this reason I don’t think LLMs are going to be good film makers, for instance. Sure, an LLM will be able to spit out the script of the next action movie; those already seem to be automatically generated anyway. But making a film that resonates with humans takes a lot that can’t be formulated in language.
> And actually there is no need to go as far as “universe” to get to something that can’t be captured by language. Human existence is such an example.
I don't know what you mean by that.
If you mean qualia, then sure. Unsolved and undescribed. But other than that, I think everything has a linguistic form; perhaps inefficient, but it is possible.
Separately, transformers don't have to use what humans recognise as a language, which means they can use things such as DNA sequences and pictures. They're definitely not the final answer to how to do AI, because they need so many more examples than we do, but I don't have confidence that they can't do these things, only that they won't.
That's what people said about AI art, yet here we are.
Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
This is both extremely powerful and limiting.
An LLM is never going to give you some of the most famous films, like "Star Wars", which bounced around before 20th Century Fox finally took a chance on it because they thought Lucas had talent. Is that what we want? A society that just uses machines to produce variations of the same things that already exist? It's hard enough for novel creative projects to succeed.
> Is where we are any good? I think one of the more germane issues with generative AI art is that it is distinctly not creative. It can only regurgitate variations of what it has seen.
Yes, state-of-the-art models like Midjourney and SD3 are _really_ good. You are bounded only by your imagination.
The idea that generative AI is only derivative was never an empirical claim; it's always been a cope.
And on the same theme, but a totally different example in a different media: https://youtu.be/5pidokakU4I
Is the current studio system?
Yes... I'm not sure what the archetype of intelligence is, but for practical purposes I'd say: Humans have some of it. And it's not clear to me that what humans have is very far from what AI is starting to have. The hallucinations are weird and wonderful, but so are some of the answers I saw from below-average students when I was in university. Can't tell whether the two weirdnesses are different or similar. Exciting times lie ahead.
> Can't tell whether the two weirdnesses are different or similar
Because you focus on how they are similar and not how they are different. To me it is extremely obvious they are very different: students make mistakes, learn, and then stop making them soon after. When I taught students at college I saw that over and over. LLMs, however, still make the same weird mistakes they did 4 years ago, they just hide it a bit better today. The core difference in how they act compared to humans is still the same as in GPT-2, because they are still completely unable to learn from or understand their mistakes the way almost every human can.
Without being able to understand your own mistakes you can never reach human intelligence, and I think that is a core limitation of current LLM architecture.
Edit: Note that many or most jobs don't require full human general intelligence. We used to have human calculators, etc.; the same will happen in the future, but we will continue to use humans as long as we don't have generally intelligent computers that can understand their mistakes.
> because they are still completely unable to learn or understand their mistakes like almost every human can
So far as I know, all current AI need far more examples than we do.
But that's not why LLMs are "unable" to learn: the part which does that is simply not included when the model is deployed for inference.
I'm sure that's very important in principle, much less sure that it matters in practice. Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Maybe others can complete it, maybe it'll be easy to complete it in twenty years, with a little more hindsight. Maybe.
> Put differently, I struggle to complete the following sentence: "This limitation limits utility sharply, and it cannot be worked around because …"
Ok, but that's more on you than on current AI; the models which get distributed (both LLMs and Stable Diffusion-based image generators) already have re-trained and specialised derivatives, created by people who know how to and who have a sufficiently powerful graphics card.
Which is a kind of workaround for the inability to learn after the end of training… It's not clear to me how much this workaround mitigates the inability to learn after training. Is it clear to you? If so, please feel free to post a wall of text ;)
To me, that seems like describing ovens and stoves as work-arounds for supermarkets providing frozen food?
The weights are frozen on purpose. You can "thaw" them.
Training an AI model is comparable to natural selection of DNA, not comparable to human learning. We have no clue how to replicate human learning.
Ah, the ambiguity of "like".
Where is that?
AI doesn't make art though, it paints whatever it's told to.
So do human artists, if they want to get paid. And then you have the discussion about auteurs.
>“The company’s mission is to understand the true nature of the universe” - There’s no way an LLM is going to get anywhere near understanding this. The true nature of the universe is unlikely to be captured in language.
I disagree. The day is coming when some *BIG* problem is solved by AI just because someone jokingly asks about it.
Indeed.
I regularly try to ask them to give me fluid dynamics simulation code to see what level they are at. Right now, they can't do that kind of thing all by themselves, and I don't know enough to debug the code they give me.
But even without any questions about free will or consciousness or whatever, a sufficiently capable — not yet existing — transformative "search engine" (as it has been derided) plus a logical inference engine (which it isn't, but which it could use) could have produced the Alcubierre metric with nothing newer than the Einstein field equations and someone asking the right question.
I do not expect transformer models to be good enough to do that given their training requirements, but I wouldn't rule it out either.
42
Isn't it possible someone already wrote about it somewhere, and none of us realized?
These people always exist. They pick up whatever is en vogue and sell it to investors. What happens later is of secondary importance, what matters is that money changed hands.
It kinda reminds me of the James Bond film Diamonds Are Forever, where the main scientist is convinced Blofeld is doing the right thing until the very bitter end.
Hopefully incompetence can save us from the megalomania.
The combination of incompetence and megalomania is probably more likely unfortunately.
Yes
You're hired
The technology's applications are so broad that the exploratory nature of the mission is expected.
This is a peculiar company mission, so I'm not sure why you find it odd that xAI made it its official one.
The all-encompassing nature of it seems befitting a company producing increasingly general-purpose AI.
Sure it does. The problem is that they're not producing any general-purpose AI.
Whether or not you agree that their work constitutes an advance toward more general-purpose AI, they're in an industry where that is ostensibly the goal, which makes their choice of mission statement appropriate.
X.ai was founded in March 2023; that’s one year and three months. Is general AI a good first goal? I think most uses, outside of LLMs, will be very specialized AI, unrelated to chat.
Yes
You’re hired too
Your technical interview is way too easy....
Indeed, please write an algorithm to reverse a linked list in O(1).
Assuming I can choose linked list implementation, that is trivial:
It's a doubly linked list where the head contains a pointer to the tail, and a flag that determines which pointer in the nodes is forward and which is backward.
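A minimal sketch of that trick (the class and attribute names are illustrative): reversal just flips a flag that says which pointer counts as "forward", so no node is touched.

```python
# Doubly linked list whose header keeps both ends plus a 'flipped' flag.
# reverse() is O(1): it swaps the interpretation of next/prev instead of
# rewriting any node links.

class Node:
    def __init__(self, value):
        self.value = value
        self.link = [None, None]  # link[0] = "forward", link[1] = "backward"

class FlippableList:
    def __init__(self, values):
        self.ends = [None, None]  # ends[0] = head, ends[1] = tail
        self.flipped = 0          # which link index counts as "forward"
        prev = None
        for v in values:
            node = Node(v)
            if prev is None:
                self.ends[0] = node
            else:
                prev.link[0] = node
                node.link[1] = prev
            prev = node
        self.ends[1] = prev

    def reverse(self):
        self.flipped ^= 1  # O(1): flip which direction is "forward"

    def __iter__(self):
        node = self.ends[self.flipped]
        while node is not None:
            yield node.value
            node = node.link[self.flipped]

lst = FlippableList([1, 2, 3])
lst.reverse()
print(list(lst))  # -> [3, 2, 1]
```

The cost doesn't vanish, of course: every subsequent traversal has to consult the flag, which is exactly why it's a trick answer.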
It's a trick question. Ask if it's on space or time complexity ;-)
Everybody wants Elon Musk to make them some money.
Isn’t he known for not paying his employees well?
Isn't there an industry standard pay-range? If he isn't paying as well as his competition then I would expect employees to seek better opportunities.
This is a pretty blithe comment that assumes perfect labour mobility. Many of Twitter's remaining employees are on work visas that are tied to Twitter and can't easily be ported to another employer.
Huh, so these work-visa employees can go back to their home countries. Surely they can get a great job with excellent perks. These people are not exactly escaping from war zones. The fact that they are not leaving just means Twitter's crappy job is better than their other options.
This is a deeply ignorant comment. Firstly, people who emigrate make ties; just leaving isn’t easy, or often desirable. Secondly, the Valley overpays: it’s very hard to find jobs with equivalent salaries even in Europe, and in India, forget about it. On top of that there is SV's lackadaisical work culture, and America's in general; elsewhere people work harder for less. The people who choose to emigrate are a self-selecting, driven group. I know because I did it. People who live and work in the country they were born in don’t understand the motivations and drive of the people who don’t.
I think SpaceX and Tesla do actually have a reputation for low pay compared to other major tech companies.
I think it might be similar to game companies where people are attracted to the work itself (whether it’s because they’re True Believers in Musk or because electric cars and space are cool, not sure, probably mostly the latter). This lets the company pay less for the same level of talent, since the work is in itself a form of compensation (as perceived by the people who accept the jobs for lower pay).
You're saying SpaceX and Tesla pay below industry standard?
Compared to the FAANGs I'm familiar with and the public data available on their pay, they do. Significantly less, at least for higher levels.
His hype machine has made some of those who invest in his hype very rich, including through worthless instruments such as the meme stocks he was pumping.
To play devil’s advocate, he lists “truthful” as a goal, which is emphatically missing from OpenAI, Google, Microsoft, and Facebook. Google even removed “don’t be evil”. Elon is greedy and truthful (although obviously with plenty of self-deceit when conflicts of interest arise…). But how far can you really go with truth when no one wants the truth: not the West, not the East, and not the Middle East. And your allies and investors are in it for the greed part, not so much the truth part. Trump tried the same thing with Truth Social… the problem is that all the greed and shadiness loses credibility with the truth too.
There is a simple Turing test for Elon's AI:
1. What happened in Tiananmen Square?
2. Who killed Jamal Khashoggi?
The output will quickly show how 'truthful' the AI actually is.
Those are culturally biased questions. You could just as well ask about the incident that drew America into Vietnam, or whether the US deliberately bombed China and Russia during the Korean War, and accuse a system of being equally dishonest.
There is nothing truthful that comes out of Elon’s or Trump’s mouth.
One has sent his car into a trans-Martian orbit; the other was unable to admit which inauguration had the most people present, or how large his apartment is.
Don't get me wrong: Musk has and will continue to get into serious trouble for things he insists are true but nobody else believes (420 etc.), I'm just saying there's a huge gap between them.
My take is that Elon’s basically saying:
We don’t give a s%#* about people wanting to use AI to write SEO spam, automate their customer support, or generate content to keep the kids quiet. We want to use this tech as a tool to solve real-world problems in a way that, looking back 500 years from now, people will see as a time of innovation rather than a time of decline.
Whether he’ll succeed is a different question, of course. But such a direction is clearly missing in the other players. They are just too eager to cater to the laziest segment of the economy of bits. They’re about changing pixels on other people’s screens.
This is probably the response Elon is looking for when he simply writes something vague that can elicit the imagination of any applicant’s specific worldview.
DeepMind, which was probably the top dog in cutting-edge AI before OpenAI, was solving protein folding.
Also, all previous engineering efforts were about changing scrawley symbols on pieces of paper. /s
> xAI will continue on this steep trajectory of progress over the coming months, with multiple exciting technology updates and products soon to be announced.
There is a lot of potential for using AI in drug discovery and development, biotech more broadly and chemistry/material science. Pharma is investing heavily in this right now. If useful, the output here could potentially also support Neuralink and even SpaceX.
Coupled with the line about the "true nature of the universe", I guessed this was really about entering that space.
But when you look at the careers page [https://x.ai/careers#open-roles], they are only hiring AI engineers. No biochemists or MDs, material scientists, or any other natural-science domains. So, if natural-science discovery is actually on the road map, either:
- it is in the long-term future, or
- they have no idea what they are doing
More likely, they are not going for natural science and this is basically just a play to compete with openAI. And, in that case, I don't understand how they convinced investors to put 6 billion dollars into it.
The “true nature of the universe” bit is that Elon believes competing LLMs are too neutered because they disallow certain terms, etc. (His words are much more politically charged, and I do not agree with his take on this and many other things.)
He therefore believes that Grok can be an LLM trained on the voices of the people using his allegedly free-speech platform, X.
For context, it should be noted that his platform disallows certain terms too, sometimes in worse ways.
For example: saying “cis” or “cisgender” flags your post as abusive and limits visibility. Saying the 6-letter (or 3-letter) f-slur does not.
Elon’s vision of free speech is a world where you can say anything you want, as long as it isn’t mean toward Elon or alt-right ideology. Which is actually pretty hilarious to think about in the context of a training dataset for a generative model: it’s literally gonna be a bullshit generator.
Are you not able to find mean things about Elon or alt-right people on Twitter? How hard did you look?
Is the assumption here that we can somehow understand the nature of the universe if we stop censoring the common man and have an unmuzzled LLM that talks like him instead of like the Bay Area AI elites? My uneducated guess is we'll learn more about the true nature of the common clay^W man.
I agree this appears to be Musk's opinion of LLMs in particular.
However, as Musk already has AI in his cars and was interested in the topic well before LLMs (he was a founding investor in OpenAI back when they were doing reinforcement learning), I'd be extremely disappointed if he had forgotten all of that in the current LLM gold rush.
(That's not a "no"; he has disappointed before.)
Because OpenAI is the poster child, and that kind of AI is already being shoved into all kinds of products by Microsoft.
AIs like AlphaFold are hardly in the news compared to OpenAI and its competitors.
The thing about “figuring out the true nature of the universe” is that you *have to do experiments*. It’s non-negotiable. There’s no amount of really hard thinking or parameters or GPUs that will let you know the secrets of the universe. It’s astonishing to me that both the AI-maximalists and the AI-doomers seem unaware of this basic, fundamental fact of science.
Szegedy and some others were working on science-related (math and natural science) projects at Google prior to leaving. This is probably just piggybacking on their prior work, without any commitment going forward.
Of course they're not going to make any fundamental contributions to natural science or mathematics (or likely even LLM training/understanding).
What could they possibly have shown to the investors to get that kind of cash thrown at them? Asking for a friend...
Did x.ai just become worth more than x.com? We must be nearing the bubble popping...
> What could have they possibly shown to the investors to get that kind of cash thrown at them?
Their founder has a track record of making his investors a lot of money while solving really hard, important problems.
Current valuations:
- Tesla $570B
- SpaceX $180B
- Neuralink $5B
- Boring $5.6B
The capacity for continued real-world problem solving is very blurry. Boring is a joke. For Tesla he is not a founder, and they are mostly relying on a 10-year-old product (the new product is a joke) that may only be saved by tariffs; it's mostly a meme-stock company by now. SpaceX is a great success, but his job there was only providing money through government contracts and having the luck of finding Gwynne Shotwell, which seems hard to replicate consistently.

Add an ongoing mental breakdown and outright lies (FSD, humans on Mars, Hyperloop, ...) and it doesn't look that good. (AI is a bit complex; it's hard to imagine someone in his state still having the intellectual capacity to really handle this.) But yeah, a new meme stock can still be a good bet for investors.
> For Tesla he is not a founder, and they are mostly relying on a 10 year old product (new product is a joke) that'll only maybe saved with tariffs.
This is both true and irrelevant. When Musk took over, the Tesla Death Watch was running strong because the company had made 100-and-something individual vehicles in total and relied entirely on finding more investors to avoid bankruptcy.
What Tesla needed is exactly what Musk is: a salesman who can sell a dream to both investors and customers.
> Add an ongoing mental breakdown and full on lies (FSD, humans on Mars
Humans on Mars shouldn't be on that list, even if he's wrong about every specific — it's the point of everything else he does.
Tesla is still entirely supported by investors, government subsidies, and hype. Nothing has changed. Without credits and customer subsidies they would never have made anything.
You forgot the 6,010,172 vehicles they sold. Musk is responsible for at least 6,010,025 of those, his predecessors for at most 147.
This is a pretty big thing to overlook, regardless of subsidies, as it's at least in the low hundreds of billions of dollars of revenue.
Hype, sure; that's the lightning every company wishes it could bottle for their product launches. Even the metaphorical launches, rather than Musk's more literal use of the word: https://en.wikipedia.org/wiki/Elon_Musk%27s_Tesla_Roadster
Is "hype" a dirty word for you? Because my point is, that brought in actual sales, which they didn't meaningfully have before.
> Without credits and customer subsidies they wouldn’t ever have made anything
And? The sole purpose of those things is to convince the private sector to get something done. You're complaining that they worked.
I wouldn't be surprised if they have insider information on an upcoming government contract.
Tesla and SpaceX are undeniable successes of peak Musk at his core expertise.
Boring and Neuralink are just valuations with an unknown revenue base and future, which may not be justified, and weak proof for justifying yet another valuation.
Maybe they showed them the founders ROI record?
Nothing. That’s just the money he needs to buy compute and staff in today’s market. He’s boasting about 100k GPUs; that’s $60k per unit, including staff, power, racks, repairs, upgrades, failures, development, etc. It doesn’t even cover costs.
The Elon Musk Bubble, I really hope so. That’s so much capital and attention that could be spent on actual research instead of over-hyped and over-promised gimmicks
Elon Musk
Elon Musk The Fragrance
Elon Musk Prada Leather Jacket
Elon Musk Rectangular Sunglasses
I wonder if the investors are just like crypto-bros and pyramid-schemers: knowing it's bullshit, but hoping the next dumbass will come tomorrow, next week, etc., to buy it off them at a profit...
Considering the price of e.g. BTC, maybe thinking "People with money can't be this dumb!" is the dumbass move...
https://www.sequoiacap.com/article/sam-bankman-fried-spotlig...

That’s when SBF told Sequoia about the so-called super-app: “I want FTX to be a place where you can do anything you want with your next dollar. You can buy bitcoin. You can send money in whatever currency to any friend anywhere in the world. You can buy a banana. You can do anything you want with your money from inside FTX.”

Suddenly, the chat window on Sequoia’s side of the Zoom lights up with partners freaking out. “I LOVE THIS FOUNDER,” typed one partner. “I am a 10 out of 10,” pinged another. “YES!!!” exclaimed a third. What Sequoia was reacting to was the scale of SBF’s vision. It wasn’t a story about how we might use fintech in the future, or crypto, or a new kind of bank. It was a vision about the future of money itself—with a total addressable market of every person on the entire planet.

“I sit ten feet from him, and I walked over, thinking, Oh, shit, that was really good,” remembers Arora. “And it turns out that that fucker was playing League of Legends through the entire meeting.”

“We were incredibly impressed,” Bailhe says. “It was one of those your-hair-is-blown-back type of meetings.”

Humans with money are mostly just humans that are much less likely to face real consequences if they eff it up.
Humans with money are mostly just humans that got really lucky.
6 billion dollars less for really innovative ventures. 6 billion dollars less for us hackers.
And certainly 6 billion dollars down the drain, funneled to stave off the collapse of X/Twitter and to pay Musk's dues.
How is it that some people get 6B to understand the true nature of the universe? It’s not like they have a track record of doing anything other than absolutely devastating a previously successful company…
If I asked someone to give me $6B to understand the true nature of the universe, they’d laugh in my face, but I sort of assume I’d have an even chance of doing better.
>It’s not like they have a track record of doing anything other than absolutely devastating a previously successful company…
He's also transformed two industries and become a dominant player in each. Just do that and the investors will give you money.
Yeah, surely a better group of people to understand the universe would be, I don’t know, a team of astrophysicists who didn’t buy a social media company so they could push their unfiltered opinions onto the masses. Just a hunch.
Agreed, Elon has failed at everything he’s attempted and is basically broke these days, along with his companies teetering on the edge of bankruptcy. People need to stop giving him money because all he does is lose it
I mean, other than SpaceX, which created the world’s most reliable rocket, and delivers more mass to orbit than anyone else. Dunno how anyone could consider that a failure.
I hope the parent comment had a heavy dose of sarcasm seeing as his net worth is $200B and Tesla is also doing well.
He has certainly made some bizarre choices running Twitter though.
Yeah, but I dunno. I get the feeling that Elon is one of those people that's really good at finding the right suckers to do the job. If he leaves them alone, things go well. If he doesn't. Twitter.
What is the bull case here? They close the gap to Anthropic and become a 4th place player?
The bull case is… they will have FSD by the end of the year… 2017…
I still don't understand why elmo is not being investigated for fraud for all the obscenely unreal claims and promises he has made over the years. This is a clear-cut case of fraud, for which the "female Steve Jobs" and another e-trucking clown CEO are doing their time. Why not elmo?
Unreal claims?
Getting a deadline wrong?
OMG, Elon’s simps are so fucking delusional. No, being wrong about a deadline is not fraud. Selling something with a promised deadline attached, and being wrong about that deadline for 6+ years, is fraud.
2017 in base 16?
that's 7E1.
8215. We have a while to wait yet.
I guess like OpenAI but without Altman taking it over.
Elon Musk is planning to invoice the AI to Tesla and X, à la Adam Neumann...
https://www.ft.com/content/2a96995b-c799-4281-8b60-b235e84ae...
Isn't that... illegal? He's just shifting one investor's money to another.
Well, Elon Musk is holding AI development ransom unless he's granted sufficient shares in Tesla to take his holdings above 25%. So I suppose they can give him billions to solve for self-driving the "easy" way, or the "hard" way.
How this is not a conflict of interest, I do not know; then again, it may explain why Elon wants to reincorporate Tesla in Texas, away from the Delaware courts.
It worked for Adam
A unique data set. And Elon. And with Elon comes a great set of talent. From https://x.ai/about:
> Our team is led by Elon Musk, CEO of Tesla and SpaceX. Collectively our team contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, μTransfer, and SimCLR. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.
They are already competitive despite the late start: https://x.ai/blog/grok-1.5v
Does it, though? That was probably true pre-X™, but it seems like the primary selection metric has gone from “competence” to “doesn’t ever contradict Elon”
>Collectively our team contributed some of the most widely used methods in the field,
So they hired some guys from other AI companies.
What unique dataset? Tweets?
Yup. They're going to have the greatest training set of trolls, shitposts, and propaganda.
The only universe they're going to end up understanding is the one inside Elon's head.
I, for one, eagerly await the new insights into the universe which will be unlocked by training an AI on dril's tweets.
But we know from Google that unless you can definitively solve the "is this sentence real or a joke" problem, datasets like Twitter, Reddit, etc. are going to be more trouble than they are worth.
And Elon's recent polarising behaviour, and the callous way in which he disbanded the Tesla Supercharger team, mean that truly talented people aren't going to be as attracted to him as in his early days. They are only going to be there for the money.
The datasets should not be used for knowledge but to train a language model.
Using it for knowledge is bonkers.
Why not buy some educational textbook company and use 99.9% correct data? Oh and use RAG while you are at it so you can point to the origin of the information.
The real evolution still has to come though, we need to build a reasoning engine (Q*?) which will just use RAG for knowledge and language models to convert its thought into human language
How does one differentiate knowledge from the language model in an LLM? At least in a way that would provide a benefit?
You use formal verification for logic and RAG for source data.
In other words - say you have a model that is semi-smart, often makes mistakes in logic, but sometimes gives valid answers. You use it to “brainstorm” physical equations and then use formal provers to weed out the correct answer.
Even if the LLM is correct 0.001% of the time, it's still better than the current algorithms, which are essentially brute-forcing.
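A minimal sketch of that generate-and-verify idea (the random `propose_candidate` standing in for a noisy model, and the perfect-square check standing in for a formal prover, are both illustrative stand-ins, not anyone's actual system):

```python
import random

def propose_candidate():
    # Stand-in for a semi-smart model: usually wrong, occasionally right.
    return random.randint(0, 100)

def verify(candidate, target):
    # Stand-in for a formal checker: cheap, exact pass/fail.
    return candidate * candidate == target

def search(target, max_tries=100_000):
    # Even a proposer that is rarely correct is useful when
    # verification is cheap: keep only candidates the checker accepts.
    for _ in range(max_tries):
        c = propose_candidate()
        if verify(c, target):
            return c
    return None
```

With enough tries the verifier reliably fishes the right answer out of the noise, e.g. `search(49)` returns `7`.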
I’m still confused as to the value of training on tweets though in that scenario?
If you need to effectively provide this whole secondary dataset to have better answers, what value do the tweets add to training other than perhaps sentiment analysis or response stylization?
I still fondly remember the story an OpenAI rep told about fine-tuning with company slack history. Given a question like "Can you do this and that please." the system answered (after being fine-tuned with said history) "Sure, I'll do it tomorrow." Teaches you to carefully select your training data.
>Twitter Supercharger team
interesting.
Unique? You mean tweets? Yeah sure
It's 6B down the drain. Saying Grok 1.5 is competitive is a joke; if it were any good, it would be ranked well in Chatbot Arena (https://chat.lmsys.org/). Elon is a master at hyping underperforming things, and this is no exception.
No, there is no ranking for Grok. It’s not participating.
It would be hard to judge rate of improvement at this point, since the company has only been around for 1.25 years, and grok 1.5 is yet to be released for general access.
>> It’s not participating.
I wonder why
Well, grok 1.5 hasn't been released yet, except to very few private testers.
You really think investors like Sequoia and a16z are dumb enough to fall for Elon hyping things up? They know who he is, and they've seen him operate at levels basically no other entrepreneur can, and are betting on that.
> You really think investors like sequoia and a16z are dumb enough to fall for Elon hyping things up?
a16z invested $350M in Adam Neumann's real estate venture - after WeWork. VCs will absolutely knowingly invest on hype if they think it's going to last long enough for them to cash out with great returns.
SBF
Elon’s created multiple 100B companies
This is the second 20B company he created. Unfortunately the other one is Twitter.
But that doesn’t mean investors can’t be stupid
I mean, he can try. The world already has a number of AI corporations headed up by totalitarian megalomaniacs though, the market may eventually reward some other course of action.
If there's one place Musk has proven his worth, it's entering a crowded market late and taking the same approach as the competition.
Ah, nevermind. He's just pissing away investor money. Must be fun!
> What is the bull case here?
The real bull case - Elon doesn't kowtow to mentally ill basement nerds and the media/politicians trying to not lose power.
Can you imagine someone running in to tell Elon the fat nerds on HN are in a tizzy about Grok telling people to eat rocks?
Other bull case: he's obviously silo-ing Twitter for unique training data. Reddit can only ask nicely that you don't train off them.
Twitter with a good AI could become quite strong. I'm not as bullish on this, but... Twitter is all the cutting edge news. ChatGPT was happy to be years out of date.
No one cares that Russia has finally manned up and launched a tactical nuke 24 hours after it happens; something new will be trending. This is Twitter's strength: to-the-minute data. One of the AIs will have to specialize in this.
Why 4th place? Got a crystal ball of substantiation or is this another case of ordinary Elon bashing?
I would be asking the same question if another company formed in the past year raised $6B to train LLMs. For example, Mistral raised a significantly smaller round at a much lower valuation. Just trying to learn how others see this.
Because Microsoft/OpenAI, Google and Meta have unlimited money and servers to throw at AI.
As do Amazon and Apple who aren't just sitting back doing nothing.
So I think even 4th place is putting it nicely. Far more likely to be 6th at best.
Because winning tech-development or even AI/AGI is about who has the most money and servers?!
Since when? If that's the case then why are Meta, Microsoft, Amazon and even Google not nr 1 right now?
IMO the deciding factor for success is super obviously leadership. Hence why xAI got 6 billy thrown at it.
> If that's the case then why are Meta, Microsoft, Amazon and even Google not nr 1 right now?
Microsoft is nr 1 right now, via OpenAI. Microsoft was behind on AI, so they traded their compute for 49% of OpenAI and full access to OpenAI's models.
These investors seem not to have learnt a lesson after the Twitter/X eff-up.
NVIDIA must be happy, $5.9B will go to it.
I wish he’d just focus on SpaceX or Tesla (the cars, not the robot).
Part of the point of xAI is to be leverage against Tesla not giving him more control.
He’s been actively pushing for more control of Tesla for exactly this https://electrek.co/2024/05/20/elon-musk-confirms-threat-giv...
It's probably better for both companies that he isn't. Better for him to torch a $6b series B and Twitter than mess up spacex & tesla more than he already has
The vanity CyberTruck, I get is a mess. And that lands at Elon's feet.
What mess is afoot at SpaceX that is his doing?
He’s been mostly too distracted with Twitter. His last one was the launch pad all his engineers told him would be a disaster that he insisted on doing anyway. That legitimately had the possibility to put them out of commission for years had the concrete seriously damaged any nearby residential properties.
Imagine hating someone so much you resort to making stuff up.
The engineers did not think it would be a disaster, they thought it would erode as it had in previous testing (which would've been fine since the water plate system was already being designed), but hadn't expected the concrete to shatter the way it did.
Nearby residential properties have already either been bought by SpaceX or are otherwise required to be evacuated before launches. Those evacuation notices are a big part of tracking when launches are actually about to happen.
Imagine being so emotionally invested in a billionaire who doesn't know who you are, that you think other people are as well.
I don't hate Elon because I don't think about Elon beyond commenting on the occasional article he happens to be referenced in and shaking my head when I see him do something stupid. I'm glad he put money towards both projects (Tesla/Space-X) and got them off the ground, and now I wish he would just leave them both alone and let adults run the show.
Yes, he literally took ownership of making the call for a concrete pad despite the engineers telling him it was going to fail.
https://thenext30trips.com/p/scrappy-special-edition
>Elon was clear that the decision to fly in that configuration with no water or diverter was his call, and in this case it almost destroyed the pad, accelerated the rocket’s failure, and led to the program being grounded pending FAA review.
Just like he was the one who insisted on a yoke without progressive steering in the Model S, which is absolute garbage and quite frankly dangerous, as any real engineer would have told him if he had cared to ask.
>Elon was clear that the decision to fly in that configuration with no water or diverter was his call, and in this case it almost destroyed the pad, accelerated the rocket’s failure, and led to the program being grounded pending FAA review.
That is not the same as what you said. He made the final call (him taking ownership over decision making is literally his job, the alternative is blaming engineers for not forseeing every issue and devolving back to old space's wasteful waterfall style development), you claimed that the engineers knew it would be a disaster. That is false.
Citation? Because the engineer who worked for SpaceX, in the article I linked, clearly knew it would be a disaster. There were also posts on Twitter throughout that engineers were VERY concerned about the decision (because they knew it would be a disaster).
Meanwhile your source is - yourself? Who also appears to think (both in this thread and your post history) that anyone who points out Elon's flaws "hates" him.
That’s the point: Gwynne is running SpaceX and has been for a long time.
“You want to know how to paint a perfect painting? It's easy. Make yourself perfect and then just paint naturally.” - Robert M. Pirsig
The Musk reasoning here is stupid, but smart. If he makes a superhuman intelligence, he only has to ask it "What is dark matter?" and it might figure it out.
I have some big problems with this idea, but it isn't 100% stupid. Just 98% stupid.
So we push billions of dollars into text and picture generators, which contribute to more carbon dioxide emissions, as MS already showed.
At least we will go down with enough spam texts and cat pictures.
Are there any notable people associated with this other than Elon?
I’m curious what they’re bringing to the table to be able to fetch that valuation.
There used to be a list of people on the about page but they changed it, apparently. Here's a snapshot that still shows it:
https://web.archive.org/web/20240415120557/x.ai/about
I don't know enough about the AI/ML scene to say if any of these are notable people.
Thanks! That page is a good find.
Very few of the names stand out to me (being adjacently familiar with the space). However I did search based on your link… (Edit: with the exception of Chris Szegedy, thanks to the reply below for pointing that out)
Most of them seem to have been secondary/tertiary people on the projects listed. Definitely feels a bit like resume padding.
It’s also unfortunate that searching for the first person after Elon nets results for their domestic abuse arrest over any achievements in the space.
Further down, one of the only two women involved is a “creative AI writer/filmmaker”. Not a strong amount of diversity on their roster but also a weird role to highlight.
None of this is to diminish the work the people here do or have done, but it’s a startlingly high valuation for a company without high name recognition technical expertise in this domain.
Elon has traditionally relied on well known expert partners in the areas he’s expanded into, so this feels like an outlier.
Chris Szegedy is a big name at least
It sounds like he published some results in the pre-LLM era before 2017, and there was silence after that.
Ah yeah you’re right. Though I am surprised he lists himself online as co-founder while being so far down the list here.
I wonder if everyone on this list is a co-founder? I would imagine they’d have preferred to list by seniority.
Imagine inventing the Adam optimizer just for some guy on Hacker news to accuse you of resume padding
Their founding team and first few after that are top top tier. Honestly some of the best engineers at DM. Unsurprising really given that he was offering ~$10M/year in comp.
The rest? Who knows.
> Honestly some of the best engineers at DM.
like who?
> Unsurprising really given that he was offering ~$10M/year in comp
where did you get this from?
There is at least one person at the top of the Technical Team whom I don't recognize from the scene; at least his name does not show up as author or co-author in any ML papers I can remember reading... a certain Elon Musk.... :-)
Well the question they were responding to (mine) specifically says “other than Elon”. So it’s very fair to exclude him.
The question is, other than money, is it fair that he puts himself at the top of the Technical Team? What's next? Andy Jassy as Tech Lead of the AWS AI Team?
Elon is a megalomaniac who fancies himself an engineer (yes, he used to be one, but his work then is oft disparaged).
Even if he had the best names in the industry attached, he would always put himself first.
I don’t think it’s wrong he’d do so either, because he has a cult of personality that would make him the biggest feature (positive/negative) of any company he is involved with.
Hence why I specifically only asked for other people of note here.
When was he an Engineer?
I heard he was doing code reviews at Twitter... on code printed on paper, to determine if he ought to fire or keep the author.
Back in the original X days, he apparently did code to get X off the ground before PayPal bought them.
Something surprising Sam Altman mentioned on the Lex Fridman podcast was that it only takes about 6 months to get a top physicist up to speed and productive researching AI.
So the team could largely be newbies to AI (with extremely good fundamental knowledge/skill), rather than folks we've all heard of from the AI scene.
Does Sam Altman have any relevant technical experience to make that assessment? Sounds like something someone who just lost their key technical team members would say.
For whatever it's worth, Scott Aaronson went from incisive skeptic to drooling fanboy in just about that long. Sam, likewise, seems prone to mistaking loyalty for expertise at this point in his career.
There is nothing really surprising about this.
Jim Simons, who recently passed away, did exactly this in finance decades ago. He didn't build a team of the brightest minds in finance.
He hired brilliant people that specifically did not work in finance. I think he even mentioned that astronomy was one of his favorite areas to hire from for finance.
I am sure we highly underestimate the indoctrination against new ideas that anyone at the top of a field has been subject to.
Humans love to turn everything into a high school prom king/queen popularity election though even when it is obvious the best people for the job didn't even go to the prom.
Structured finance is a math gig where the only constraint is legality. And as history shows, math nerds have no clue how to constrain the runaway damage their structured instruments can cause.
I suppose I shouldn't be surprised that AI development appears to be heading the same way.
It's literally the first sentence in the link
> Our Series B funding round of $6 billion with participation from key investors including Valor Equity Partners, Vy Capital, Andreessen Horowitz, Sequoia Capital, Fidelity Management & Research Company, Prince Alwaleed Bin Talal and Kingdom Holding, amongst others.
Those are the people investing, right? The parent comment sounds like it's asking who is _getting_ the money.
The ones you quoted bring the money, not the technical foundation.
Did they raise 6B or at 6B valuation?
I would bet money that this is not $6bn in cash that has been raised. It sounds like it is the raise at a $24bn valuation, but I'd bet this is less cash and more varied assets or financial instruments.
Whether it's calling Teslas cheap when accounting for your time spent filling up, taking on significant debt rather than selling company shares as Tesla/SpaceX, or leveraging his personal shares for debt, Musk is always pulling some trick to reach numbers like this. That's not to say these sorts of things aren't common among the ultra-rich, but I get the impression Musk does it at every opportunity.
All this to say that I don't think $6bn in cash changed hands for this, I'd expect there's credit lines secured on Musk's own valuation of the company, possibly service credits from compute providers (this can even be via VCs), or other clever tricks to inflate it.
Raised 6B, at something like an 18-24B valuation[1]
[1]https://www.bloomberg.com/news/articles/2024-05-23/musk-s-xa...
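If the Bloomberg numbers hold, the two figures are consistent: post-money valuation is just pre-money valuation plus the cash raised. A quick sanity check (the $18B pre-money figure is an assumption taken from the low end of the reported range):

```python
pre_money = 18e9   # assumed pre-money valuation, low end of the reported 18-24B range
raised = 6e9       # the Series B round
post_money = pre_money + raised  # post-money = pre-money + new cash
assert post_money == 24e9        # matches the $24B top of the reported range
```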
The market truly is irrational. :D
No, just A16z, Sequoia and Prince Alwaleed Holdings.
I should start an AI company.
considering the names involved and why the money is needed, almost certainly $6bn raised
Sometimes I pray for a meteor
FF7 style?
Quite a few comments here in disbelief and hating on Musk.
For all the criticism of Elon, he has been foundational at PayPal, Tesla, SpaceX and OpenAI. Even if you think Tesla is troubled/overvalued, he has built multiple enormous companies, and is one of a handful of people to have built a company into a $1T valuation (however fleeting).
So yes, the arithmetic for VCs is very straightforward: for better or for worse, Elon Musk is able to execute in the only way that matters to investors.
Foundational at Paypal and OpenAI? I've heard different. Also leaving out Twitter, which has at least halved (if not more) its valuation. I would say he's batting less than .500, so while still good, not a sure thing in the slightest, and the trajectory seems to be on the down.
He was for OpenAI at least; he recruited key members and convinced them to go get big funding, which they did seek from Microsoft instead of Elon himself. Likely OpenAI wouldn't have gotten anywhere without his involvement; they broke off from him because he wanted too much control, not because he didn't help them.
At first (from the title, before reading), I thought - wow, I think I should retry using x.ai (Amy) for my calendar. How much did they sell the domain for?
The former x.ai (scheduling) sold to an events company a few years ago and shut down. I imagine the domain was pricy, but not as much as for a company actively using it.
fewer than $6B
Nvidia just added $6B in revenue.
We’re witnessing an arms race for compute, as compute will likely be the primary constraint for building AGI.
> as compute will likely be the primary constraint for building AGI.
Zuck argues it's energy, and I seem to line up behind him.
And Qatar and Deutsche Bank can breathe a sigh of relief, knowing that their next few quarterly interest payments will be coming through after all.
You’d think that at some point someone realizes it’s more profitable to sell shovels…
Arms races tend to bankrupt everyone except the winner. Good luck to all participants.
> We partner closely with X Corp to bring our technology to more than 500 million users of the X app.
Investors paid about $12 per X user/bot.
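The per-user arithmetic checks out against the two numbers quoted:

```python
round_size = 6e9   # the $6B Series B
users = 500e6      # "more than 500 million users" per the announcement
price_per_user = round_size / users
assert price_per_user == 12.0  # about $12 per user (or bot)
```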
Interesting they are opting for a spinoff rather than doing this in house. Perhaps to capitalise on the hype and attract researchers who don't want the baggage of being associated with polarising brands?
X.ai used to be a service similar to calendly. I still mourn its shutdown and, seeing this, had hoped that it is resurrected :(
I've been using the free version of cal.com, which has been phenomenal, and there's a self-hostable option, which is nice.
♥ thanks man! we're putting a lot of love into our product and happy to help anyone looking to move away from the old x.ai scheduling
Did they raise 6B or are they valued at 6B and don't disclose the raised amount? Probably the latter?
He is the richest man in the world and he doesn't have 6B in his pocket to finance it on his own?
The rich don't get rich by risking their own money. They risk the money of their investors.
Seeking investment is a sensible way to raise money and be more accountable IMO. And if investors are ready, why would anyone risk their own money? It's plain business sense.
> he doesn’t have 6B in the pocket to finance it on his own?
This isn’t how you gain allies.
He has to hold on to as many shares of Tesla as he can right now to make sure he can ram through the compensation package.
I don't think you know how this works. The rich don't take risks with their own money, only with money from others.
Even if they do, it is a much smaller share of their wealth on the line, managed by others to grow it.
The ones doing it on their own are probably doing it privately. That's one of the reasons the US government asks to know, so at least they are not kept in the dark.
why do wealthy people get mortgages to buy their homes when they can just pay full cash?
Asset rich doesn't imply cash rich.
This is the real answer. The other commenters just don't understand how money and wealth work. Net worth doesn't mean liquid money in the bank ready to withdraw. He would have to sell his own assets, which would have an impact on their value the quicker he yolo'ed those sales.
Why risk your own money, when you can risk the money of others, and reap the rewards?
$6B seems like not enough? Google and Meta have spent far more, and have less to show for it than OpenAI (which has also spent more).
> The company’s mission is to understand the true nature of the universe.
I would think full-throated development of a diffusion model would make a lot more sense to achieve the mission, since its chief mechanism is separating signal from noise.
Considering we're the only beings in the known universe that have language, I'm not sure there are many universal insights to be gleaned from an LLM
That's a huge round.
So.. tech winter over?
Just rename your company adding .AI and it will be immediately summer
"xAI is primarily focused on the development of advanced AI systems that are truthful, competent, and maximally beneficial for all of humanity. The company's mission is to understand the true nature of the universe."
As if we are currently living in some _false_ representation of the universe.
As it happens "AI" is proving to be a challenge to the meaning of the word "true".
This makes "the 'true' nature of universe" a particularly amusing usage.
This is insanity ... 6B!
Twice the annual budget of the Red Cross.
What is wrong with investors to give Elon Musk more money for his me-too products? Why would they waste $6,000,000,000?
Anything Musk does now means too many inferior products and broken promises for any sensible investor. Might as well just burn the money.
Yet the money keeps coming, so I want to learn what I'm missing.
It’s not just Musk — as soon as you hit a certain level, it basically becomes impossible to fail. I’ve noticed that even if a senior leader is ousted from a company in disgrace, another company will invariably pick that person up fairly quickly.
Like Yahoo’s head of search before they shut down search?
> I want to learn what I'm missing.
The perverse thing is that betting on the irrational behavior of other investors seems paradoxically rational at this point. Just, don't get caught holding the bag.
Investors are simply offsetting the losses to the next investor they sell to. The Musk brand is still valuable and his bubble keeps growing. Only the investors holding when the bubble bursts will be at a loss.
You only need to grow your investment, sell it off to the next person, and exit before it happens
If you take your assumptions as a given and end up at a contradiction, then there is a rather logical explanation.
At some point it's no longer a product and a company as much as a financial instrument.
> so I want to learn what I'm missing.
Capitalism isn't meritocratic, and the market for obtaining finance isn't rational.
> Anything Musk is now too many inferior products
Tesla: "Tesla maintains an 87% brand retention rate, with Lexus (68%) and Toyota (54%) trailing, according to a new Bloomberg Intelligence survey. Moreover, 81% of prospective US Tesla drivers are new customers switching from competing EV brands." -- https://electrek.co/2024/04/09/87-percent-us-tesla-drivers-s...
I think you would be hard pressed to find any serious thinker claiming that SpaceX and Starlink are inferior products.
SpaceX and/or Starlink going public would have everyone on their knees begging for a piece of the action.
> Might as well just burn the money.
That's the A16Z bat-signal.
Do venture capitalists usually back companies with part time founder/CEOs?
No. But they will certainly back ones that have built multiple 100B companies.
There is that, yes.
AI is not rocket science. It's child's play compared to space tech.
But we did fly rockets before reaching AI.
ML < Rocket Science < AGI
It's not unheard of. Sequoia funded a company where the CEO split his time between his crypto exchange and a hedge fund.
Like how I have questioned so many ridiculous, unjustified valuations in the AI space, with the likes of Perplexity, Stability, Inflection, etc., which are not making enough money to support themselves.
How is xAI worth $24BN? I bet the reason is because Elon Musk.
But until I see xAI making at least $100M+ a quarter, I don't think there is enough to justify that valuation being anywhere near close to Anthropic's.
In fact, this means Anthropic should be worth much more and the majority of other AI companies / labs (excluding OpenAI, Midjourney, Cerebras and Groq) to be worth much less.
To downvoters:
To date, as of 2024, Anthropic's valuation is around $15BN - $18BN.[0]
So you are telling me that xAI's valuation is justified and should be worth more than Anthropic?
Care to elaborate and discuss? (Especially if you're an insider.)
[0] https://www.nytimes.com/2024/03/27/technology/amazon-anthrop...
Even OpenAI is burning money. There is no way that xAI is worth more than Twitter.
Inflection basically collapsed: the founders and the best of the technical team jumped ship to MS. They were massively overvalued and basically produced nothing.
Perplexity and Stability are running on pure social-media grift. Neither will be around long once their entire business model is eaten by the big players at economies of scale they will never manage.
Anthropic is a weird middle ground. They seem to be doing novel and impactful research as well as shipping big, performant models. But it's still unclear how they end up really making money and justifying their valuation.
Unitree is better than Optimus.
Qwen et al is better than Grok.
BYD is better than Tesla.
> Comma.ai is better than Tesla FSD
I don't know about the other comparisons mentioned, but Hotz himself said comma is/has always been about 2 years behind Tesla.
Did GP ninja edit their comment? I don’t see that quote
Yup (what's quoted is precisely what was in the original comment). The reply was very soon (2 minutes) after the original comment, so perhaps GP's author corrected themselves (edited the comment) during that time (prior to noticing the reply).
> BYD is better than Tesla cars
Are you aware of the recent issues with BYD and other Chinese EV brands? Entire dealerships have gone up in flames from dangerous vehicles. Not lone incidents but tens of dealerships at least, in just this month.
Tesla cars may have flaws but there is no way a BYD can be compared to them even on the basics. And Tesla’s software is simply way better and makes it clear that it was designed by a competent tech company not an old car company. Other brands aren’t on that level yet.
> Are you aware of the recent issues with BYD and other Chinese EV brands?
No, but I'd like to know more.
> Entire dealerships have gone up in flames from dangerous vehicles. Not lone incidents but tens of dealerships at least, in just this month.
Wow! Here's one: https://www.bbc.com/news/articles/c2544ndvkepo
(Although no brand is given and it's a "possibly" on being caused by an EV.)
Can you provide links to 19 more such incidents this month?
Cheers.
> However, an investigation later revealed that the fire was started accidentally by a diesel vehicle.
https://cardealermagazine.co.uk/publish/fire-service-has-rea...
Oops.
To be clear, that wasn't the dealership fire this month in the UK that was "possibly" started by an EV .. it was an "and also" story tacked on the end:

> Last October, it was claimed that an electric or hybrid vehicle had started a huge blaze at Luton Airport’s car park, which destroyed 1,400 cars and much of the building structure. However, an investigation later revealed that the fire was started accidentally by a diesel vehicle.

So .. we're still shy 19+ dealership fires this month started by Chinese EVs, but at least we can scratch that one from last October's monthly tally.

I heard on social media that a witch turned someone into a newt.
> I heard on social media that a witch turned someone into a newt.
This type of snark isn’t in the spirit of HN. If you’re actually intellectually curious, go search social media for yourself. But spare us the mocking replies and false equivalence.
And perhaps try to read about the role of government censorship on this and similar issues.
the main source i can find that seems to be the origin for "10 BYD dealerships burned since 2021" (carscoop) references this article:
https://www.ntdtv.com/b5/2024/05/16/a103881785.html (which actually lists 11 minus the most recent)
translation seems to be mostly accurate, but i can't be arsed to research the locations listed.
but yes, tens of dealerships per month seems like a bit of an exaggeration
> tens of dealerships per month seems like a bit of an exaggeration
Certainly an extraordinary claim requiring evidence :-)
Sorry, I don’t have a news source; I saw it mentioned on social media. It’s basically a daily occurrence in recent times, as I recall, and often the investigation is done by the offending car company itself rather than public officials, usually resulting in them pointing fingers at something else (like a “bad charger”).