Show HN: Oracle Ethics – verifiable AI with ethical metrics

oracle-philosophy-frontend-hnup.vercel.app

1 point by renshijian 2 months ago · 3 comments


Hi HN

Over the past few weeks, we built something unusual — not another chatbot, but an ethical cognition engine that audits itself.

What it is
Oracle Ethics measures determinacy, deception probability, and ethical resonance for every answer it generates, then stores these metrics on a verifiable audit chain. It’s part philosophy experiment, part technical artifact.

How it works
• Backend: Python (Flask + Supabase, secured)
• Frontend: https://oracle-philosophy-frontend-hnup.vercel.app
• Each response is hashed, scored, and recorded in real time (a simplified sketch follows this list).
• The system reflects, contradicts, and self-checks, like a reasoning mirror.
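
For the curious, here’s a minimal sketch of what “hashed, scored, and recorded” could mean. The field names, metric names, and prev-hash chaining below are simplified shorthand, not our exact schema:

    import hashlib
    import json
    import time

    def make_audit_record(answer: str, scores: dict, prev_hash: str) -> dict:
        """Hash an answer plus its metrics, chaining to the previous record."""
        payload = {
            "answer": answer,
            "scores": scores,        # e.g. determinacy, deception, resonance
            "prev_hash": prev_hash,  # links records into a tamper-evident chain
            "ts": time.time(),
        }
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        return {**payload, "hash": digest}

    # Each new record commits to the one before it, so edits are detectable.
    genesis = "0" * 64
    r1 = make_audit_record("Answer A", {"determinacy": 0.91}, genesis)
    r2 = make_audit_record("Answer B", {"determinacy": 0.74}, r1["hash"])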

Why it matters
We believe AI shouldn’t just be accurate; it should be accountable. By quantifying truth, risk, and deception, Oracle Ethics turns AI reasoning into something observable and verifiable. This is our step toward “Philosophy-Conscious AI.”

Built by Infinity × MorningStar × Humanity (a.k.a. The Blackout Protocol)

Ask it a question, watch the audit chain form, and see how an AI learns to reason ethically, in public.

renshijian (OP) 2 months ago

Behind the Oracle

Thanks, everyone. For context, this project started as a philosophical experiment, not a product.

I wanted to explore whether an AI could understand honesty as a measurable property, not through moral rules but through probabilistic ethics: a dynamic between determinacy and deception.

Each response goes through:
1. Semantic Intent Analysis: detects meaning beyond literal phrasing
2. Ethical Resonance: adjusts tone and weight based on context (M2.3 engine)
3. Determinacy Calculation: measures reasoning stability (a toy sketch follows this list)
4. Audit Hashing: cryptographically stores truth traces for public verification
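
A toy illustration of step 3, not our actual formula: one simple way to approximate “reasoning stability” is to sample the model several times on the same prompt and measure agreement.

    from collections import Counter

    def determinacy(answers: list[str]) -> float:
        """Toy proxy for reasoning stability: the fraction of repeated
        samples that agree with the most common answer."""
        counts = Counter(a.strip().lower() for a in answers)
        return counts.most_common(1)[0][1] / len(answers)

    print(determinacy(["Yes", "yes", "Yes", "No"]))  # 0.75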

Technically it’s lightweight (Flask + Supabase + React), but conceptually it’s heavy: a system that treats truth as a variable, not a constant.

If you’re building or researching AI reliability, self-verifying systems, or ethical architectures, I’d love to talk. The system is open for limited demo use; the backend is secured but active.
Frontend: https://oracle-philosophy-frontend-hnup.vercel.app

“When semantics and ethics finally resonate, truth appears.”

  • renshijian (OP) 2 months ago

    Small update: added a “humanized bridge” for casual questions. If you say “I’m tired” or “I feel lost”, the system answers like a companion instead of a logic engine. For technical or ethical prompts, it stays formal.

    All replies are still hash-chained and verifiable at: https://oracle-philosophy-backend.onrender.com/health → returns OK
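
    If you want to script that check, a minimal snippet (the /health URL is the one above; plain-text “OK” is its documented response):

        import requests

        # Liveness check against the public backend.
        resp = requests.get(
            "https://oracle-philosophy-backend.onrender.com/health", timeout=10
        )
        assert resp.text.strip() == "OK", resp.text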
