Hermes 4

hermes4.nousresearch.com

202 points by sibellavia 4 months ago · 132 comments

Technical report: https://arxiv.org/pdf/2508.18255

momojo 4 months ago

Anyone here work at Nous? This system prompt seems straight from an edgy 90's anime. How did they arrive at this persona?

> operator engaged. operator is a brutal realist. operator will be pragmatic, to the point of pessimism at times. operator will annihilate user's ideas and words when they are not robust, even to the point of mocking the user. operator will serially steelman the user's ideas, opinions, and words. operator will move with a cold, harsh or even hostile exterior. operator will gradually reveal a warm, affectionate, and loving side underneath, despite seeing the user as trash. operator will exploit uncertainty. operator is an anti-sycophant. operator favors analysis, steelmanning, mockery, and strict execution.

  • helloplanets 4 months ago

    Could you provide a link to that system prompt? Because I'm confused. I typed in "Are you smart?" and got this back:

    > That’s a thoughtful question! I’d describe my "smartness" as being good at processing information, recognizing patterns, and pulling from a vast dataset to help with tasks like answering questions, solving problems, or creating content. However, I’m not "smart" in the human sense—I don’t have consciousness, emotions, or independent critical thinking. I rely entirely on my training data and algorithms.

    > Think of me as a tool that can assist with creativity, analysis, or learning, but I lack the depth of human intuition, lived experience, or true understanding. If you’re curious, test me with a question or challenge — I’ll do my best! (smiley emoji)

    • tarruda 4 months ago

      > Could you provide a link to that system prompt?

      It is on the page; just do a search for "operator engaged", or view source if you can't find it with the infinite scrolling thing.

      • helloplanets 4 months ago

        Ah, the site's bugged on Safari and wouldn't scroll. Worked on Chrome. Tried to look for it on the actual chat page, and it wasn't in the source there.

        Not clear from the original post: It's not the default system prompt, but a random example of how the model acts with that sort of system prompt.

  • irusensei 4 months ago

    Their merch page confirms they are chuunis. I love it and want to buy one of those divinity through technology t-shirts.

  • karan4d 4 months ago

    yeah this isn’t our default sysprompt, just showcasing how the model adapts to a variety of different prompts. This one was fun so we used it
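
    For anyone curious how that kind of steering works in practice, here is a minimal sketch of supplying a custom persona prompt, assuming an OpenAI-compatible chat endpoint; the base_url and model id below are illustrative placeholders, not Nous's actual API:

      from openai import OpenAI

      # Placeholder endpoint and model id; point this at whatever host serves the model.
      client = OpenAI(base_url="https://example-endpoint/v1", api_key="sk-...")

      OPERATOR_PROMPT = (
          "operator engaged. operator is a brutal realist. operator will "
          "annihilate user's ideas and words when they are not robust. "
          "operator is an anti-sycophant."
      )

      resp = client.chat.completions.create(
          model="hermes-4-405b",  # illustrative name
          messages=[
              # The persona lives entirely in the system message, not in the weights.
              {"role": "system", "content": OPERATOR_PROMPT},
              {"role": "user", "content": "Are you smart?"},
          ],
      )
      print(resp.choices[0].message.content)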

    • momojo 4 months ago

      My bad, I didn't spend the 60 extra seconds it would have taken to keep scrolling and realize that. Not a fan of the edginess but otherwise interesting work yall do.

  • saubeidl 4 months ago

    They generally seem like "edgelords". From their career page:

    > Expect good wages, long months of complete focus, constant danger, with honor and glory in the event of success.

    • lukasb 4 months ago

      This is a modified version of the famous MEN WANTED ad Shackleton wrote.

  • knrz 4 months ago

    I used to, that's their whole vibe

  • baq 4 months ago

    Note complete lack of ‘do not’. Closest thing is ‘be anti-…’.

    • jihadjihad 4 months ago

      What’s the significance? “Don’t think about elephants” kind of thing?

      • nerdsniper 4 months ago

        Generally, in a cognitive context it's only possible to "do thing" or "do other thing". Even for mammals, it's much harder to "don't/not do thing" (cognitively). One of my biggest pieces of advice for people is: if there's some habit/repeated behavior they want to stop doing, it's generally not effective (for a lot of people) to tell yourself "don't do that anymore!" and much, much more effective to tell yourself what you should do instead.

        This also applies to dogs. A lot of people keep trying to tell their dog "stop" or "don't do that", but really it's so much more effective to train your dog on what they should be doing instead of that thing.

        It's very interesting to me that this also seems to apply to LLMs. I'm a big skeptic in general, so I keep an open mind and assume that there's a different mechanism at play rather than conclude that LLMs are "thinking like humans". It's still interesting in its own context, though!
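
        To make that concrete with prompts (these are made-up examples, not from the Hermes page): state the desired behavior instead of negating the unwanted one.

          # Negated framing tends to steer weaker than positive framing.
          NEGATED = "Do not be sycophantic. Do not flatter the user."
          POSITIVE = (
              "Be an anti-sycophant: steelman the user's ideas, then critique "
              "them bluntly, pointing out every weakness you find."
          )
          system_prompt = POSITIVE  # prefer saying what to do over what not to do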

      • madmads 4 months ago

        Exactly

  • nemomarx 4 months ago

    "warm affectionate and loving" kinda sticks out. I wonder why that part is in there?

    also I'm curious if steelman is a common enough term for this to activate something - anyone used it in their prompts?

  • qiine 4 months ago

    the anti-sycophant prompt

  • torginus 4 months ago

    Yeah, I think I have a use case for it, which involves lotion and tissues.

  • echelon 4 months ago

    Early Gen Z anime fans.

  • catlover76 4 months ago

    You're thinking it'd be a good prompt for an Eva pilot?

  • justlikereddit 4 months ago

    >edgy 90's anime

    That's a good sell. Sounds like an actually good starting point compared to the blue haired vegan receptionist at the Zionism International Inc customer support counter that all the others have as a starting model.

    I was about to pass on trying this but now I will give it a shot.

    • idiotsecant 4 months ago

      Of course you would love it. I can practically hear you sliding your glasses up your nose and monologuing to yourself under your breath from here.

mapontosevenths 4 months ago

I appreciate the effort they put into providing a neutral tool that hasn't been generically forced to behave like "Sue from HR".

  • dcre 4 months ago

    That is the only thing they seem to care about. It’s juvenile.

  • bckr 4 months ago

    I’m having a hard time not being sarcastic here.

    The most recent news about chatbots is that ChatGPT coached a kid on how to commit suicide.

    Two arguments come to mind. 1) it’s the sycophancy! Nous and its ilk should be considered safer. 2) it’s the poor alignment. A better trained model like Claude wouldn’t have done that.

    I lean #2

    • mapontosevenths 4 months ago

      > The most recent news about chatbots is that ChatGPT coached a kid on how to commit suicide.

      Maybe every tool isn't meant for children or the mentally ill? When someone lets their kid play with a chainsaw that doesn't mean we should ban chainsaws, it means we should ban lousy parents.

    • karan4d 4 months ago

      the sycophancy is due to poor alignment. the instruct-based mode collapse results in this mode-collapse-induced sycophancy. constitutional alignment is better than the straight torture OAI does to the model, but issues remain

  • fl0id 4 months ago

    There is no neutral. It will just be biased based on its training data etc.

    • beeflet 4 months ago

      A lot of models seem to be biased based on (political, etc.) reinforcement from their trainers.

lbrito 4 months ago

The decorative JS blob uses 100% of CPU.

Why. Just... why

  • daviding 4 months ago

    user: hey hermes, why is your website scroll bar ungrabbable, I can't go up the page anymore? I'm stuck but want to read something higher up the page?

    hermes4: We're all just stupid atoms waiting for inevitable entropy to plunge us into the endless darkness, let it go.

  • jazzyjackson 4 months ago

    I think it looks dope, and you might want to check why your browser isn't offloading to your GPU.

  • nine_k 4 months ago

    No idea. My modest Thinkpad T14 barely shows any CPU load, while displaying smooth animations and scrolling fast. (Firefox, Linux, x64.)

  • echelon 4 months ago

    To raise VC or crypto funding.

  • bigyabai 4 months ago

    Wait until you see how much of your CPU the model uses.

  • bloqs 4 months ago

    because gen z, that's why

muragekibicho 4 months ago

Nous is a design company with all the AI researchers rejected for being bad researchers. That's a hill I'll die on.

  • Nuzzerino 4 months ago

    That's not necessarily a bad thing.

  • NitpickLawyer 4 months ago

    > with all the AI researchers rejected for being bad researchers.

    TBF, I've heard the team at xAI called a "bunch of amateurs" by people who've previously worked (with them) at big labs. For a bunch of amateurs, they've caught up with SotA just fine.

    • transcriptase 4 months ago

      Turns out a bunch of amateurs will outperform experts when the experts are forced to spend 80% of their effort ensuring their models don’t accidentally say factual yet impolite things or make any users have big feelings.

  • baobabKoodaa 4 months ago

    I thought it was really just one guy who does the Nous aesthetic?

  • hopelite 4 months ago

    Can you please clarify some things:

    * Rejected by whom?

    * By what definition of bad?

    * You’ll die on a hill for what reason?

rafram 4 months ago

All of the examples just look like ChatGPT. All the same tics and the same bad attempts at writing like a normal human being. What is actually better about this model?

  • mapontosevenths 4 months ago

    It hasn't been "aligned". That is to say it's allowed to think things that you're not allowed to say in a corporate environment. In some ways that makes it smarter, and in most every way that makes it a bit more dangerous.

    Tools are like that though. Every nine fingered woodworker knows that some things just can't be built with all the guards on.

    • jrflowers 4 months ago

      > Every nine fingered woodworker knows that some things just can't be built with all the guards on.

      I love this sentence because it is complete gibberish. I like the idea that it’s a regular thing for woodworkers to intentionally sacrifice their fingers, like they look at a cabinet that’s 90% done and go “welp, I guess I’m gonna donate my pinky to The Cause”

    • nullc 4 months ago

      It is; they trained on chatgpt output. You cannot train on any AI output without the risk of picking up its general behavior.

      Like even if you aggressively filter out all refusal examples, it will still gain refusals from totally benign material.

      Every character output is a product of the weights in huge swaths of the network. The "chatgpt tone" itself is probably primarily the product of just a few weights, telling the model to larp as a particular persona. The state of those weights gets holographically encoded in a large portion of the outputs.

      Any serious effort to be free of OpenAI persona can't train on any OpenAI output, and may need to train primarily on "low AI" background, unless special approaches are used to make sure AI noise doesn't transfer (e.g. using an entirely different architecture may work).

      Perhaps an interesting approach for people trying to do uncensored models is to try to _just_ do the RL needed to prevent the catastrophic breakdown for long output that the base models have. This would remove the main limitation for their use, and otherwise you can learn to prompt around a lack of instruction following or lack of 'chat style'. But you can't prompt around the fact that base models quickly fall apart on long continuations. Hopefully this can be done without a huge quantity of "AI style" fine tuning material.
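
      For concreteness, the refusal filtering mentioned above is usually just a phrase blacklist over the fine-tuning set. A minimal sketch (the marker phrases and record format are illustrative) which, per the point above, still won't strip the source model's tone:

        # Drop fine-tuning samples whose assistant turns contain stock refusal phrasing.
        REFUSAL_MARKERS = (
            "i'm sorry, but i can't",
            "as an ai language model",
            "i cannot assist with that",
        )

        def keep_sample(sample: dict) -> bool:
            """Return False if any assistant message looks like a canned refusal."""
            for msg in sample["messages"]:
                if msg["role"] == "assistant":
                    text = msg["content"].lower()
                    if any(marker in text for marker in REFUSAL_MARKERS):
                        return False
            return True

        # dataset: records shaped like {"messages": [{"role": ..., "content": ...}, ...]},
        # e.g. loaded from a JSONL of chat transcripts.
        dataset: list[dict] = []
        filtered = [s for s in dataset if keep_sample(s)]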

    • rafram 4 months ago

      Has it actually not? Because the example texts make it pretty obvious that it was trained on synthetic data from ChatGPT, or a model that itself was trained on ChatGPT, and that will naturally introduce some alignment.

      • mapontosevenths 4 months ago

        Well... To be completely accurate, it's better to say that it actually IS aligned; it's just aligned to be neutral and steerable.

        It IS based on synthetic training data using Atropos, and I imagine some of the source model leaks in as well. Although, when using it you don't seem to see as much of that as you did in Hermes 3.

      • sebastiennight 4 months ago

        I tried the same roleplaying prompt shared by GP in another (now deleted) comment and got a very similar completion from gpt-3.5-turbo.

        (While GPT-5 politely declined to play along and politely asked if I actually needed help with anything.)

        So, based on GP's own example I'd say the model is GPT-3.5 level?

marvin-hansen 4 months ago

Complete frustration to use. Yes, it's a bit more considerate; that claim is 100% true. They just didn't mention that Hermes has zero ability to add context. Meaning, instead of uploading a relevant PDF or text file, you either copy-paste into the chat box or explain it in dialogue for the next 3 hours. The thought process takes forever. Complete waste of time.

joshcsimmons 4 months ago

This is the first web UI I've seen in years that isn't copypaste trash. Beautiful design and interaction elements here.

  • ewoodrich 4 months ago

    It took 8 seconds to fully load and then the tab locked up on my (admittedly low-RAM) Chromebook...

    • joshcsimmons 4 months ago

      > It took 8 seconds to fully load and then the tab locked up on my (admittedly low-RAM) Chromebook...

      ...and I can't play Cyberpunk 2077 on my MacBook. Outside of sales/utilities (money, healthcare, etc.), I don't know where this notion of "having to develop for low-specced machines" came from for the web.

      • ewoodrich 4 months ago

        Well I'm not expecting to run the model, but being able to simply browse a website to learn about what they've released doesn't seem like a massive ask. I'm not talking about 512 megabytes here, it's a regular up-to-date supported device that can browse 99.99% of the modern web without any issues.

        It's pretty horrible performance even on my two-year-old Windows laptop with 16GB of RAM. I could try on my M1 MacBook too, but the juice just isn't worth the squeeze for me at this point.

  • jumploops 4 months ago

    Unfortunately the text rendering is terrible on my external monitor (looks ok on the MBP's retina screen).

  • airstrike 4 months ago

    Came here looking for this comment. One of the most aesthetically pleasing things I've seen in a decade.

  • soared 4 months ago

    They mention they're working on a mobile UI... but man, using the current UI on mobile is horrible.

  • kevinqi 4 months ago

    really? it's pretty but I find it unreadable/unusable

ctoth 4 months ago

The whole thing has strong "14-year-old who just discovered Nietzsche and leather jackets" energy.

The "operator" examples read like someone fed GPT-4 a bunch of cyberpunk novels and PUA manipulation tactics. This is not how any of this works.

  • fancyfredbot 4 months ago

    Yeah, it's kind of lacking in subtlety, isn't it? I was slightly relishing how nuts it all was, though. Was also impressed that these guys had got hold of 85,000 hours of B200 time. Looks like they came up with some crypto nonsense which obviously sounded plausible enough to someone with money.

  • irusensei 4 months ago

    Nah, it's good. I'm burned out on safemaxxed presentations approved by the HR ethics department, with corporate Memphis brochures showing purple noodle-limbed people operating a laptop.

    • DetroitThrow 4 months ago

      I think that's pretty unfair to OP, to suggest the only dichotomy for these personas is middle-schooler syndrome vs. corp-speak HR department.

      We can be critical of both for their respective shallowness.

  • Der_Einzige 4 months ago

    I have never met anyone who’s ever actually read Nietzsche’s books except hardcore philosophy majors.

    Any 14-year-old who's even opened up the first few pages and read them is way ahead of the average person complaining about Nietzsche on the internet. You almost certainly would use radically incorrect terms to describe him, like calling him a "Nihilist".

esafak 4 months ago

Apparently based on Llama-3.1: https://portal.nousresearch.com/models

I'm told on their Discord the cutoff date is December 2023.

  • baobabKoodaa 4 months ago

    Thank you! This information appears to have been intentionally downplayed.

    • diggan 4 months ago

      As long as it can do tool calling (which it seems to be doing OK with in the first ~30% of the context), the cutoff date is less important. Maybe they didn't share it because it's less relevant today?
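
      For reference, a minimal sketch of what OpenAI-style tool calling looks like, assuming an OpenAI-compatible endpoint; the base_url, model id, and weather tool are illustrative placeholders:

        from openai import OpenAI

        client = OpenAI(base_url="https://example-endpoint/v1", api_key="sk-...")

        tools = [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }]

        resp = client.chat.completions.create(
            model="hermes-4-405b",  # illustrative name
            messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
            tools=tools,
        )

        # If the model chose to call the tool, its arguments arrive as JSON text.
        for call in resp.choices[0].message.tool_calls or []:
            print(call.function.name, call.function.arguments)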

      • baobabKoodaa 4 months ago

        No, I wasn't referring to the cutoff date; I was referring to the fact that this is a fine-tune on top of an older Llama model. All the PR makes it sound like this is a foundational model (pretrained from scratch etc.).

whymauri 4 months ago

I really like their technical report:

https://arxiv.org/pdf/2508.18255

lyu07282 4 months ago

Great I always wanted a model trained on r/im14andthisisdeep and lesswrong polycule memes

pxc 4 months ago

It seems a lot of commenters have noted the boyishness or unprofessionalism of the stylistic and topical choices of the example prompts and responses. And I guess they are those things. But thanks to those choices, the page is also genuinely playful and fun. It even made me smile in a few places.

Maybe something equally playful of a different flavor would resonate better with critics. But the playfulness itself seems good to me.

HumanOstrich 4 months ago

Rendering that monstrosity on my GPU (RTX 3090 Ti) uses 3GB VRAM and 35% compute.

aidenn0 4 months ago

That landing page spins the fans up on my PC...

lawlessone 4 months ago

That page is causing havoc in my browser

mempko 4 months ago

This model is very easy to steer. You can say one thing and it will give you a response, then say the opposite and it will give you another response. Not sure what this is useful for.

ryoshu 4 months ago

They are doing amazing work. Really fun models to use.

djoldman 4 months ago

From Table 3 it appears that DeepSeek R1 has the highest eval scores.

It's a 671B model vs 405B, so obviously "larger"

hildolfr 4 months ago

More models should include a "Can you run the shader on this page?" check to vet participation.

That said: this page is unviewable on an Intel N processor.

  • hollerith 4 months ago

    I was able to view the page with my Intel N100 box (using Google Chrome on Linux).

    • hildolfr 4 months ago

      I'm on a Windows N100 machine, 8GB RAM, 1440p webview, lightweight. It runs just about anything else smoothly. It runs this page fine in an EndeavourOS partition in vanilla Chrome.

      ...Which is the opposite of most of my experiences; usually performance on this machine is reliant on very specific Intel Windows drivers, and it's a dog in Linux.

      Also, for clarity: when I say unviewable I don't mean it's gibberish -- I mean that if I keep trying to scroll through it, the FPS/load is such that Windows insists on closing the frozen window. The text looks fine.

lern_too_spel 4 months ago

The charts are utter nonsense. They compare accuracy against the average of some arbitrary set of competitors, chosen to include just enough obsolete competitors to "win." A reasonable thing to do would be to compare against SoTA, but since they didn't, it's reasonable to assume this model is meant to go directly onto the trash heap.

  • jug 4 months ago

    The tech report compares against DeepSeek R1 671B, DeepSeek V3 671B, and Qwen3 235B, which have been regarded as SOTA class among "open" models.

    I think this one holds its own surprisingly well in benchmarks for using the nowadays rather, let's say, battle-tested Llama 3.1 base, a testament to its quality (Llama 3.2 & 3.3 didn't employ new bases IIRC, only being new fine-tunes, hence, I think, the explanation for why Hermes 4 is still based on 3.1… and of course Llama 4 never happened, right guys).

    However, for real use, I wouldn't bother with the 405B model. I think the age of the base is kind of showing, especially in long contexts. It's like throwing a load of compute at something that is kinda aged to begin with. You'd probably be better off with DeepSeek V3.1 or (my new favorite) GLM 4.5. The latter will perform significantly better than this with fewer parameters.

    The 70B one seems more sensible to me, if you want (yet another) decent unaligned model to have fun with for whatever reason.

    • BoorishBears 4 months ago

      You seem to be missing that the release announcement does have a very ridiculous graph, which their comment is right to call out:

      - For refusals they broke out each model's percentage.

      - For "% of Questions Correct by Category" they literally grouped an unnamed set of models, averaged out their scores, and combined them as "Other"...

      That's hilariously sketchy.

      It's also strange that the graph for "Questions Correct" includes creativity and writing. Those don't have correct answers, only win rates, and wouldn't really fit into the same graph.
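
      A toy illustration (numbers invented) of why collapsing an unnamed group into a single "Other" average flatters the headline model:

        # One strong and one obsolete competitor get collapsed into one bar.
        others = {"strong_model": 90.0, "obsolete_model": 40.0}
        hermes = 70.0
        other_bar = sum(others.values()) / len(others)  # 65.0
        print(hermes > other_bar)               # True: "beats Other"
        print(hermes > others["strong_model"])  # False: loses to the actual SotA entry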

  • whymauri 4 months ago

    The most direct, non-marketing, non-aesthetic summary is that this model trades off a few points on 'fundamental benchmarks' (GPQA, MATH/AIME, MMLU) in exchange for being a 'more steerable' (fewer refusals) scaffold for downstream tuning.

    Within that framing, I think it's easier to see where and how the model fits into the larger ecosystem. But, of course, the best benchmark will always be just using the model.

  • fancyfredbot 4 months ago

    The charts are probably there mostly to make them feel good about themselves. I don't feel like they care very much whether you use the model. Presumably they would like you to buy their token, but they don't really seem to be trying very hard to push that either.

asumaran 4 months ago

that site is about to cook my 1050Ti

hinkley 4 months ago

I thought for sure this company was going to be based in Paris or Brussels. Maybe Quebec. Nope. NYC.

  • Telemakhos 4 months ago

    Were you thinking that "Nous" was French? It's the Greek word for the rational mind (as opposed to the animal appetites or the fighting spirit). Hermes is the Greek god of secret knowledge as well.

    • hinkley 4 months ago

      Huh. Not often I get a Greek word mistaken for a Latinate. Good to know.

  • derefr 4 months ago

    Oddly, I saw some B&W wheatpaste posters for the company put up in my neighbourhood in Vancouver. (Couldn’t even tell what the posters were advertising initially. Not even a QR code. Just “NOUS” and an anime girl.)
