Claude Opus 4.6 accuracy on BridgeBench hallucination test drops from 83% to 68%

twitter.com

70 points by bratao 23 days ago · 20 comments

Reubend 23 days ago

Because the website doesn't seem to report the number of runs, I assume they ran the suite once.

The models are nondeterministic, and therefore it's pretty normal for different runs to give different results.

I don't see this as evidence that Opus 4.6 has gotten worse.

  • bsder 23 days ago

    > The models are nondeterministic, and therefore it's pretty normal for different runs to give different results.

    And how is that an excuse?

    I don't care about how good a model could be. I care about how good a model was on my run.

    Consequently, my opinion on a model is going to be based around its worst performance, not its best.

    As such, this qualifies as strong evidence that Opus 4.6 has gotten worse.

    • senko 23 days ago

      >> The models are nondeterministic, and therefore it's pretty normal for different runs to give different results.

      > And how is that an excuse? […] this qualifies as strong evidence…

      This qualifies as nothing, because of how random processes work; that's what the GP is saying. The numbers are not reliable if it's just one run.

      If this is counter-intuitive, a refresher on basic statistics and probability theory may be in order.
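
The statistics point can be made concrete. If a benchmark is a set of independent pass/fail items, a single run's score carries binomial sampling noise, and the width of the uncertainty depends heavily on the (unreported) suite size. A minimal sketch, where the pass rate 0.83 is taken from the headline and the suite sizes (100 and 30 items) are purely hypothetical:

```python
import math

def binomial_std_error(p, n):
    """Standard error of an observed pass rate p over n independent test items."""
    return math.sqrt(p * (1 - p) / n)

p = 0.83  # headline pass rate; suite sizes below are assumptions
for n in (100, 30):
    se = binomial_std_error(p, n)
    # Approximate 95% interval for a single run: p ± 1.96 standard errors
    low, high = p - 1.96 * se, p + 1.96 * se
    print(f"n={n}: single-run 95% interval roughly {low:.2f} .. {high:.2f}")
```

With a 100-item suite the interval is roughly ±7 points; with a 30-item suite it widens to roughly ±13 points, so whether a 15-point drop is explainable by run-to-run noise depends directly on how many items and runs the benchmark used.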

      • bsder 22 days ago

        > If this is counter-intuitive, a refresher on basic statistics and probability theory may be in order.

        I'm not running "statistics". I'm running an individual run. I care about the individual quality of my run and not the general quality of the "aggregate".

        The problem here is that the difference may not be immediately observable. Sure, if it doesn't give a correct answer, that's quickly catchable. If it costs me 10x the time, that's not immediately catchable but no less problematic.

    • jmalicki 23 days ago

      No, what they're saying is the previous run could have just been lucky and not representative!

  • slurpyb 23 days ago

    I would love to know what you’re doing in the harness to not feel the total degradation in experience now in comparison to December & January.

  • coldtea 23 days ago

    >I don't see this as evidence that Opus 4.6 has gotten worse.

    I see it as corroborating evidence of actual everyday experience.

    Also, any reason to imply "BridgeBench", apparently dedicated to AI benchmarking, wouldn't have run it more than once across the suite?

    • Reubend 22 days ago

      > Also, any reason to imply "BridgeBench", apparently dedicated to AI benchmarking, wouldn't have run it more than once across the suite?

      They didn't list a sample size of runs, didn't show any numbers for variance across runs, etc...

      So while they may have done that behind the scenes and just not told us, this doesn't seem like a rigorous analysis to me. It seems to me like people just want to find data that support the conclusion they already decided on (which is that Opus got worse).

  • dlahoda 23 days ago

    are models really non deterministic?

    • Rury 23 days ago

      People are describing the results when they say models are non-deterministic. Give it the same exact input twice, and you'll get two different outputs. Deterministic would mean the same input always gives the same output.

    • loneboat 23 days ago

      Yes. Look up LLM "temperature" - it's a parameter that tweaks how deterministically they behave.
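
The role temperature plays can be sketched in a few lines: logits are divided by the temperature before the softmax, and a temperature of zero is conventionally treated as greedy argmax, which removes the sampling randomness. A toy illustration (the logit values are made up; real decoders add top-k/top-p filtering and other details):

```python
import math, random

def sample_token(logits, temperature, rng=random):
    """Sample an index from a list of logits after temperature scaling.
    temperature == 0 is treated as greedy argmax (deterministic)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

logits = [2.0, 1.0, 0.5]          # toy values
print(sample_token(logits, 0))    # always picks index 0 (greedy)
```

At temperature 0 every call returns the same index; at higher temperatures repeated calls spread across the vocabulary, which is the "non-determinism" users observe.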

      • csomar 23 days ago

        The models are deterministic, the inference is not.

        • coldtea 23 days ago

          Which is a useless distinction. When we say models in this context we mean the whole LLM + infrastructure to serve it (including caches, etc).

        • jmalicki 23 days ago

          What does that even mean?

          Even then, depending on the specific implementation, associativity of floating point could be an issue between batch sizes, between exactly how KV cache is implemented, etc.

          • csomar 23 days ago

            That's still an inference time issue. If you have perfect inference with a zero temperature, the models are deterministic. There is no intrinsic randomness in software-only computing.

            • jmalicki 23 days ago

              Floating point associativity differences can lead to non-determinism with 0 temperature if the order of operations is non-deterministic.

              Anyone with reasonable experience in GPU computation who pays attention knows that even randomness in warp completion times can easily lead to non-determinism due to associativity differences.

              For instance: https://www.twosigma.com/articles/a-workaround-for-non-deter...

              It is very well known among practitioners that CUDA isn't strictly deterministic due to these factors.

              Differences in batch sizes of inference compound these issues.

              Edit: to be more specific, the non-determinism mostly comes from map-reduce style operations, where the map is deterministic, but the order that items are sent to the reduce steps (or how elements are arranged in the tree for a tree reduce) can be non-deterministic.
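
The associativity point is easy to demonstrate on the CPU: IEEE 754 addition is not associative, so the same values summed in a different order can round differently, which is exactly what happens when GPU reduction order varies between runs. A minimal sketch (toy data, not actual kernel behavior):

```python
import random

# Classic demonstration: float addition is not associative.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False

# Same effect at scale: summing the same values in a different order
# can produce a slightly different total, analogous to what a GPU
# reduction does when warp/block scheduling changes between runs.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(0, 8)
      for _ in range(10_000)]
reordered = sorted(xs)             # deterministic reordering for the demo
print(sum(xs) - sum(reordered))    # typically a small nonzero difference
```

The map step of a reduction is bit-exact; it is the tree/order in which partial sums are combined that injects the run-to-run variation.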

spacebacon 23 days ago

Computational semiotics has been empirically proven. Model releasing soon. In the meantime, for the love of god someone recognize this and help blow these numbers out of the water.

https://open.substack.com/pub/sublius/p/the-semiotic-reflexi...

ehtbanton 23 days ago

Benchmarks like this one are designed to thoroughly test the model across several iterations. 15% is a MASSIVE discrepancy.

Come on Anthropic, admit what you're doing already and let us access your best models unhindered, even if it costs us more. At the moment we just all feel short-changed.
