Why are we accepting silent data corruption in Vector Search? (x86 vs. ARM)

5 points by varshith17 a month ago · 10 comments


I spent the last week chasing a "ghost" in a RAG pipeline and I think I’ve found something that the industry is collectively ignoring.

We assume that if we generate an embedding and store it, the "memory" is stable. But I found that f32 distance calculations (the backbone of FAISS, Chroma, etc.) act as a "Forking Path."

If you run the exact same insertion sequence on an x86 server (AVX-512) and an ARM MacBook (NEON), the memory states diverge at the bit level. It's not just "floating point noise"; it's deterministic drift caused by FMA (fused multiply-add) instruction differences.
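
For intuition, here is a toy repro of the underlying mechanism (illustrative only; the real drift also involves FMA contraction): float32 addition isn't associative, so a scalar left-to-right sum and a SIMD-style pairwise sum of the same values can differ at the bit level.

    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.standard_normal(1024).astype(np.float32)

    # Scalar left-to-right accumulation.
    seq = np.float32(0.0)
    for x in v:
        seq = seq + x

    # Pairwise (tree) reduction, closer to what wide SIMD lanes compute.
    t = v.copy()
    while t.size > 1:
        t = t[0::2] + t[1::2]
    tree = t[0]

    bits = np.array([seq, tree], dtype=np.float32).view(np.uint32)
    print(seq == tree)                 # typically False
    print([hex(b) for b in bits])      # two different bit patterns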

I wrote a script to inspect the raw bits of a sentence-transformers vector across my M3 Max and a Xeon instance. Semantic similarity was 0.9999, but the raw stored bits were different.
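
The script boiled down to something like this (the model name here is illustrative; run it on each machine and diff the output files):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model choice
    vec = np.asarray(model.encode("same sentence on every machine"),
                     dtype=np.float32)

    # Reinterpret each f32 as a u32 and dump hex, one component per line.
    with open("embedding_bits.txt", "w") as f:
        for i, b in enumerate(vec.view(np.uint32)):
            f.write(f"{i}\t{int(b):08x}\n")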

For a regulated AI agent (Finance/Healthcare), this is a nightmare. It means your audit trail is technically hallucinating depending on which server processed the query. You cannot have "Write Once, Run Anywhere" index portability.

The Fix (Going no_std)

I got so frustrated that I bypassed the standard libraries and wrote a custom kernel (Valori) in Rust using Q16.16 fixed-point arithmetic. By strictly enforcing integer associativity, I got 100% bit-identical snapshots across x86, ARM, and WASM (a toy sketch of the idea follows the numbers below).

Recall Loss: Negligible (99.8% Recall@10 vs standard f32).

Performance: < 500µs latency (comparable to unoptimized f32).
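
In Python terms, the core trick is tiny (a toy sketch of the idea, not the Rust kernel itself):

    # Each value is an integer holding round(x * 2^16). Integer add/mul are
    # exactly associative, so every platform agrees bit-for-bit.
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS

    def to_q16_16(x: float) -> int:
        return int(round(x * ONE))        # the one lossy crossing, at ingest

    def dot_q16_16(a, b):
        # Products are Q32.32; shift each back to Q16.16 before accumulating.
        # (A real kernel would use a fixed-width i64 accumulator; Python ints
        # just keep the sketch short.)
        return sum((x * y) >> FRAC_BITS for x, y in zip(a, b))

    u = [to_q16_16(v) for v in (0.12, -0.98, 0.33)]
    w = [to_q16_16(v) for v in (0.50, 0.11, -0.75)]
    print(dot_q16_16(u, w) / ONE)         # ~ -0.2953, same bits everywhere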

The Ask / Paper

I've written a formal preprint analyzing this "Forking Path" problem and the Q16.16 proofs. I am currently trying to submit it to arXiv (Distributed Computing / cs.DC) but I'm stuck in the endorsement queue.

If you want to tear apart my Rust code: https://github.com/varshith-Git/Valori-Kernel

If you are an arXiv endorser for cs.DC (or cs.DB) and want to see the draft, I’d love to send it to you.

Am I the only one worried about building "reliable" agents on such shaky numerical foundations?

realitydrift a month ago

This reads more like a semantic fidelity problem at the infrastructure layer. We’ve normalized drift because embeddings feel fuzzy, but the moment they’re persisted and reused, they become part of system state, and silent divergence across hardware breaks auditability and coordination. Locking down determinism where we still can feels like a prerequisite for anything beyond toy agents, especially once decisions need to be replayed, verified, or agreed upon.

codingdave a month ago

> We assume that if we generate an embedding and store it, the "memory" is stable.

Why do you assume that? In my experience, the "memory" is never stable. You seem to have higher expectations of reliability than would be reasonable.

If you have proven that unreliability, that proof is actually interesting. But it seems less like a bug and more like an observation of how things work.

  • varshith17OP a month ago

    "You seem to have higher expectations of reliability than would be reasonable."

    If SQLite returned slightly different rows depending on whether the server was running an Intel or AMD chip, we wouldn't call that "an observation of how things work." We would call it data corruption.

    We have normalized this "unreliability" in AI because we treat embeddings as fuzzy probabilistic magic. But at the storage layer, they are just numbers.

    If I am building a search bar? Sure, 0.99 vs 0.98 doesn't matter.

    But if I am building a decentralized consensus network where 100 nodes need to sign a state root, or a regulatory audit trail for a financial agent, "memory drift" isn't a quirk, it's a system failure.

    My "proof" isn't just that it breaks; it's that it doesn't have to. I replaced the f32 math with a fixed-point kernel (Valori) and got bit-perfect stability across architectures.

    Non-determinism is not a law of physics. It’s just a tradeoff we got lazy about.

varshith17OP a month ago

Github repo: https://github.com/varshith-Git/Valori-Kernel

chrisjj a month ago

> Am I the only one worried about building "reliable" agents on such shaky numerical foundations?

You might be the only one expecting a reliable "AI" agent period.

  • varshith17OP a month ago

    "You might be the only one expecting a reliable 'AI' agent period."

    That is a defeatist take.

    Just because the driver (the LLM) is unpredictable doesn't mean the car (the infrastructure) should have loose wheels.

    We accept that models are probabilistic. We shouldn't accept that our databases are.

    If the "brain" is fuzzy, the "notebook" it reads from shouldn't be rewriting itself based on which CPU it's running on. Adding system-level drift to model level hallucinations is just bad engineering.

    If we ever want to graduate from "Chatbot Toys" to "Agentic Systems," we have to lock down the variables we actually control. The storage layer is one of them.

    • michalsustr a month ago

      It actually gets worse. The GPUs are numerically non-deterministic too, so your embeddings may not be fully reproducible either.

      • varshith17OP a month ago

        You are absolutely right. GPU parallelism (especially reduction ops) combined with floating-point non-associativity means the same model can produce slightly different embeddings on different hardware.

        However, that makes deterministic memory more critical, not less.

        Right now, we have 'Double Non-Determinism':

        1. The model produces drifting floats.

        2. The vector DB (using f32) introduces more drift during indexing and search (different HNSW graph structures on different CPUs).

        Valori acts as a Stabilization Boundary. We can't fix the GPU (yet), but once that vector hits our kernel, we normalize it to Q16.16 and freeze it. This guarantees that Input A + Database State B = Result C every single time, regardless of whether the server is x86 or ARM.
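
        The boundary itself is basically one function (illustrative sketch; the name is mine, not the kernel's API):

            import numpy as np

            FRAC_BITS = 16

            def ingest(vec_f32: np.ndarray) -> np.ndarray:
                # The single lossy crossing: quantize the (possibly drifting)
                # float vector to Q16.16 int32. Everything downstream (graph
                # build, distance math, persistence) is integer-only and
                # therefore bit-identical across CPUs.
                q = np.round(vec_f32.astype(np.float64) * (1 << FRAC_BITS))
                return q.astype(np.int32)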

        Without this boundary, you can't even audit where the drift came from.

      • chrisjj a month ago

        One could switch one's GPU arithmetic to integer...

        ... or resign oneself to the fact we've entered the age of Approximate Computing.

        • varshith17OP a month ago

          Switching GPUs to integer (Quantization) is happening, yes. But that only fixes the inference step.

          The problem Valori solves is downstream: Memory State.

          We can accept 'Approximate Computing' for generating a probability distribution (the model's thought). We cannot accept it for storing and retrieving that state (the system's memory).

          If I 'resign myself' to approximate memory, I can't build consensus, I can't audit decisions, and I can't sync state between nodes.

          'Approximate Nearest Neighbor' (ANN) refers to the algorithm's recall trade-off, not an excuse for hardware-dependent non-determinism. Valori proves you can have approximate search that is still bit-perfectly reproducible. Correctness shouldn't be a casualty of the AI age.
