Show HN: I block LLM hallucinations with a cognitive math framework (re!Think it)

github.com

2 points by Real_Egor 11 days ago · 0 comments · 2 min read


I wrote this prompt intuitively. This is just how I think: formulas, branches, hard conditions. I simply tried to explain to the model how to think, in the same language I use myself, the same language I've always used to design database architectures and business logic for clients. What used to be "data flows" and "function nodes" became "input flows" and "decision nodes." And it worked like a charm.
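To make that mapping concrete, here is a loose sketch in code, purely my illustration, not the actual re!Think it protocol (which lives in the repo). The function name `decision_node` and the specific hard condition are assumptions of mine; the only point is that an answer flows through a gate that blocks rather than softens.

```python
# Hypothetical illustration only -- NOT the author's protocol.
# Models the post's mapping: "function nodes" become "decision nodes",
# and a hard condition gates the output flow instead of letting the
# model improvise.

def decision_node(answer: str, sources: list[str]) -> str:
    """Hard condition: an unsupported claim is blocked, not hedged."""
    if not sources:
        # Block the flow entirely rather than emit a plausible guess.
        return "INSUFFICIENT_DATA"
    return answer

print(decision_node("Paris is the capital of France.", ["atlas"]))
print(decision_node("The moon is made of cheese.", []))
```

The design choice being illustrated is binary: the node either passes the input flow through or stops it cold, which is what distinguishes a "hard condition" from the usual soft instruction in a prompt.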

Cognitive frameworks and semantic programming: this is what you can already do with LLMs today. Industrial scale? Not yet, and I know that clearly; twenty years of deploying IT systems for businesses gives you a sober eye. But I have zero doubt these contracts actually work. Anyone can drop this prompt into any model and ask it directly how viable the idea is. I keep building more of them: for attention drift, for auditing business contracts.

We will never reach AGI by building crutches around the model.

The essay link points to the repo.

Inside: the protocol in three languages (Russian, English, and Chinese), each in a full version for studying the mechanics and a compact version for use with smaller models.

There are also two easter eggs: pseudocode and Lojban. I started both as token-reduction experiments. What I got instead was resistance to semantic drift. Didn't see that coming.

P.S. Coming soon: a surprisingly elegant fix for the endless model disclaimers and apologies. Turns out you can redirect that energy instead of fighting it.

