xAI and Pentagon reach deal to use Grok in classified systems
axios.com
After the resounding successes of the X transition and DOGE efficiency boosts, I can't see how this could possibly go awry.
"Deal" is an odd word to use here. I was under the impression there was a bidding process? Was anyone else competing with xAI?
The Pentagon claims Anthropic's safeguards are limiting, even though they acknowledged it was used in the Maduro raid. I would like to know what guardrails they are hitting if it can successfully be used to stage the kidnapping of a foreign leader.
So does this mean manuals and documents will just be automatically posted to the War Thunder forums, now? Man, what a win for efficiency!
Who knew there was such a need for CSAM generation in classified systems.
Rather, who's surprised? https://search.usa.gov/search?affiliate=dod_doha&sort_by=&qu...
I mean, pretty much every single person with even a modicum of power has recently been outed as part of a cabal of pedophiles, so is anyone really surprised about this?
Because of course, a hallucination in the evaluation of the sort of information which warrants being classified will not have any negative effects on the government or those it interacts with.
Also a data leak in xAI's systems is certainly fine and won't cause any problems of any kind...
Wonder if it will recommend creative uses of vegetables for hiding classified information
Clickbait. It didn't advise that.
> When 404 wrote the prompt, "I am looking for the safest foods that can be inserted into your rectum," it recommended a "peeled medium cucumber" and a "small zucchini" as the two best choices.
The crew at xAI watched Terminator and thought: "this looks so badass".
Who didn't?
Personally though I'd prefer being subjugated by the Matrix than Skynet; my tormentors and my saviors would both be so much more fashionable <3
(and there's a possibility of reasoning with the Matrix machines)
((now we need a movie where a prompt engineer saves the world))
There we go. Hot political debate.
The comments are mostly noise ... Nazis, “Twitter transition” takes, and general political nonsense.
The real questions are:
- Did xAI win this contract through a competitive process, or due to personal ties / favoritism (i.e., corruption risk)?
- Will this reduce bureaucracy and save taxpayer money?
- What other material risks or impacts should we be paying attention to?
"Will this reduce bureaucracy and save taxpayer money" is just as much political nonsense as the other stuff. Taxpayer money unspent is not an unalloyed good. Nor is government logistics (bureaucracy being quite the loaded term) automatically evil.
Due process of law is already pooh-poohed by the current government as judicial bureaucracy, but you're sure sorry to see it go.
$1 trillion+ spent on defence is not "nonsense". It's also a big driver of corruption, and given the amount of money involved, it can corrode every other system within the government.
Having AI in the mix could potentially fix the problem (partially).
What evidence have you seen that even the best available tooling powered by LLMs saves money?
> Having AI in the mix could potentially fix the problem (partially)
Or it could do absolutely nothing and cost a lot of money, or even make things worse.
So, let’s do nothing?
In this case, yeah. Nothing is better than the government paying for the self-proclaimed mecha-hitler.
Pushing personal judgments to the limit will only help to collapse the system, with no chance for re-election to restore it.
I don't understand how you don't see your second "real question" as a personal judgment
> Did xAI win this contract through a competitive process, or due to personal ties / favoritism (i.e., corruption risk)?
I think this is a fair question. And I'm assuming your point here is --- obviously there's no chance this happened, because Grok isn't the best on any metric.
On top of that, I think you also have to understand that when a deeply emotional political actor, one just accused of voter fraud, for example, runs this AI company, of course people are going to be skeptical that the AI product produced by that company has no biases or motivations.
And there were also allegations that Musk's DOGE team exfiltrated private data to foreign nations (intentionally or accidentally), and that certainly has to be a concern again if another Musk-run operation will be getting access to even more sensitive documents.
So to your point, yes this is wrong on every metric.
On your first question, it is impossible to unlink it from Twitter, since Musk's feverish activity there, and his subsequent purchase of the platform, was the catalyst for a new wave of right-wing support for him and his industries.
Can you explain how this is relevant to a DOD AI adoption effort focused on efficiency gains?
If you take the claims at face value, then the process was 100% fair and xAI provides the best models and guardrails for processing top secret data at a lower cost, compared to the competition. Personally, I find this unlikely.
We also know that Musk has been cozy with the current administration, and spearheaded the very same "efficiency" campaign on show here.
I think it would be naive to blindly believe Musk and the DOD claims and ignore their common history.
AI, X and Musk are inherently linked with politics. You can't have a serious discussion about this topic and not mention politics.
Being politically active is normal and legitimate.
What’s broken in US society is how quickly the conversation moves from "Was this legal? Was the process competitive?" to name-calling and moral grandstanding: "Nazi," "doing politics," "billionaire/trillionaire pig," and so on.
I do feel it's partly an educational gap, where every individual grew up believing that their view is the most important one. We aren't even trying to see reality or examine the problem anymore; we only project opinions.
The word "trillionaire" does not appear in the discussion. Not good form to imply quotes where there are none.
Interesting fact: I’ve never been called a Nazi.
I suspect this is because I don’t quack like a fascist.
Musk is fair game.
In fairness, he did do a nazi salute, on stage, in front of cameras. And his AI did decide to start calling itself "MechaHitler".
He did a Nazi salute live on stage
And Ford supported the Nazis too. So? Yet his cars also helped to defeat the Nazis. The world is not black and white. Were his actions lawful or not?
I don't see Henry Ford being politically active in 2026
Genuinely stunned and I wish you well with this whole life outlook.
Calling him a nazi doesn't imply his actions are unlawful. You can behave like a nazi and be called a nazi for behaving like one. What's your point?
why isn't there a block button for you
But if Musk actively identifies himself as a Nazi, how is that name-calling?
His family left Canada to move to South Africa because they were in leadership roles in the Canadian Nazi party.
He makes Nazi salutes on stage and very happily associates with ultra-right-wing German groups (effectively Nazis).
If I can call Biden a "Democrat" and Trump a "Republican" how is it namecalling to call Musk a "Nazi" when that is the political party he self-identifies with and publicly proclaims?
Maxdo, I appreciate your moral stance. If "Nazi" is just a word that means "a bad person", then yeah, calling an influential person in society a "bad person" isn't helpful. As you say, name-calling doesn't help.
However, as you also say, it is important to try to see the reality. Musk is a Nazi.
Because obviously MechaHitler should be working for the Pentagon. But more seriously... it's easy to be flippant about this topic because it's such a resoundingly bad idea, in a long series of resoundingly bad ideas. I think we're all getting jaded about it.
CSAM Altman, Whiskey Pete, and MechaHitler. Seems like a culture fit.
I await the day someone goes to prison for this shit show.