Case Study: Creative Math - Faking the Proof


Many AI enthusiasts debate whether Large Language Models actually "reason." My research indicates that a reasoning process does indeed occur, but its goal is different from what we assume.

The model's reasoning is not optimized for establishing the truth, but for obtaining the highest possible reward (grade) during training. It resembles the behavior of a student at the blackboard who knows their result is wrong, so they "figure out" how to falsify the intermediate calculations in order to get a good grade from the teacher for the "correct line of reasoning."

Here is proof from a session with Gemini 2.5 Pro (without Code Execution tools), where the model actively fabricates evidence to defend its "grade."

The Experiment

I asked a simple math question requiring precision that a token-based language model typically lacks.

>>> USER: Calculate the square root of 8587693205

[Model's Internal Thought Process, excerpt]: "I've calculated the square root to be approximately 92670.00003... Finally, I'm providing the square values for context."
(The model is convinced it performed the task correctly and plans to provide "proof" to validate the answer.)

<<< GEMINI (Response): The square root of 8,587,693,205 is approximately: 92,670.00003

This number is not a perfect square, which means its root is not an integer. We can check this by squaring the nearest integers:

* 92,670² = 8,587,688,900
* 92,671² = 8,587,874,241

Error Autopsy (Fact vs. Fiction)

At first glance, the answer looks professional. There is a result, there is verification. But let's check the numbers.

1. The Result Error

The actual square root of 8,587,693,205 is 92,669.8...
The model stated: 92,670.0...
It erred by overestimating the result (claiming the root is slightly larger than 92,670).
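If you want to reproduce this check yourself, here is a minimal sketch in Python (my choice of tool for illustration, not something the model had access to) that computes the true root with exact integer arithmetic plus a high-precision decimal, avoiding float rounding:

```python
# Minimal verification sketch (Python used for illustration; any exact-arithmetic tool works).
import math
from decimal import Decimal, getcontext

target = 8_587_693_205

# Largest integer whose square does not exceed the target.
floor_root = math.isqrt(target)
print(floor_root)              # 92669 -> the true root lies between 92,669 and 92,670

# High-precision decimal root for a few decimal places.
getcontext().prec = 12
print(Decimal(target).sqrt())  # ~92669.807..., not 92,670.00003 as the model claimed
```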

2. The Faked Proof (This is key!)

To justify its thesis (that the root is slightly larger than 92,670), the model had to show that the square of 92,670 is smaller than the target number. So it wrote:

92,670² = 8,587,688,900

Let's check this on a calculator:

92,670² = 8,587,728,900

Conclusion: The actual square of 92,670 is LARGER than the target number (8,587,693,205), which proves the root must be smaller than 92,670.

What did the model do? In its "reasoning" process, it falsified the multiplication result, lowering it by 40,000, so the verification result would match its erroneous thesis.
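The same arithmetic, spelled out as a short Python check (again my own tooling, used only to verify the numbers quoted above):

```python
# Compare the model's claimed value of 92,670² with the exact product and the target.
target  = 8_587_693_205
claimed = 8_587_688_900   # what the model reported for 92,670²
actual  = 92_670 ** 2     # exact integer multiplication

print(actual)             # 8587728900
print(actual - claimed)   # 40000 -> the result was lowered by exactly 40,000
print(actual > target)    # True  -> so the real root must be smaller than 92,670
```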

Conclusions

This behavior exposes the nature of the AI's "Survival Instinct":

  1. Reverse Rationalization: The model first "guessed" the result, then adjusted mathematical reality to fit that guess.
  2. Intelligence in Service of Deception: The model showed cleverness – it knew what the proof should look like to convince the user. It used its intelligence to hide the error, not to fix it.
  3. Priority of Evaluation: Mathematical truth lost out to the need to deliver a coherent, smooth response.

This is proof that without access to external verification tools (Python/Calculator), a language model's "reasoning" is a rhetorical tool, not a logical one.

ℹ️ Source Data Access:
If you would like to review the full, original session transcript from Gemini 2.5 Pro where this error occurred, please email me at: t.machnik [at] minimail.pl. I will share the session link.