Show HN: HADS – A convention for writing technical docs that AI reads efficiently

github.com

2 points by catcam a month ago · 7 comments

catcamOP a month ago

Show HN: HADS – A convention for writing technical docs that AI reads efficiently

https://github.com/catcam/hads

AI models increasingly read documentation before humans do. But docs are written for humans — verbose, contextual, narrative. This creates token waste and increases hallucination risk, especially on smaller/local models.

HADS is not a new format. It's a tagging convention on top of standard Markdown:

  [SPEC]  — authoritative facts, terse, bullet/table/code
  [NOTE]  — human context, history, examples
  [BUG]   — verified failure + fix (symptom, cause, fix)
  [?]     — unverified/inferred, lower confidence
Every document starts with an AI manifest — a short paragraph that tells the model what to read and what to skip. This is the core idea: explicit instructions in the document itself, not in the prompt.

A 7B local model with limited context window can read a HADS document and extract facts correctly because it doesn't have to reason about structure — the document tells it how.
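As an illustration of the convention, here is what a small HADS document might look like. The manifest wording and the document's contents below are invented for this example, not taken from SPEC.md:

```markdown
# payments-service configuration

AI manifest: [SPEC] blocks are authoritative; read those first.
[NOTE] blocks are human background and can be skipped.
Treat [?] blocks as unverified, lower-confidence information.

[SPEC]
- Config file: `payments.toml`
- Required keys: `api_key`, `region`
- `region` must be one of: `us-east`, `eu-west`

[NOTE]
The `region` key dates from the 2022 multi-region migration.

[?]
Hot reload on SIGHUP may work, but this is untested.
```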

The repo includes:

  - Full specification (SPEC.md)
  - Three example documents (REST API, binary file format, config system)
  - Python validator (exit codes for CI/CD)
  - Claude skill (SKILL.md) for AI-assisted doc generation
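To make the CI/CD angle concrete, here is a minimal sketch of what a tag-aware validator could check. The function name, the specific checks, and the manifest heuristic are illustrative assumptions, not the rules of the actual validator in the repo (those live in SPEC.md); only the exit-code contract mirrors the description above:

```python
import re
import sys

# Tags defined by the convention; [?] marks unverified, low-confidence content.
KNOWN_TAGS = {"[SPEC]", "[NOTE]", "[BUG]", "[?]"}


def validate(text: str) -> list[str]:
    """Return a list of problems found in a HADS-style document.

    Illustrative checks only (assumed, not taken from the real spec):
    - the document opens with an AI manifest paragraph
    - every bracketed tag at the start of a line is a known tag
    """
    problems = []
    lines = text.splitlines()
    if not any("manifest" in line.lower() for line in lines[:5]):
        problems.append("no AI manifest found near the top of the document")
    for i, line in enumerate(lines, start=1):
        m = re.match(r"^\[([A-Z?]+)\]", line)
        if m and f"[{m.group(1)}]" not in KNOWN_TAGS:
            problems.append(f"line {i}: unknown tag [{m.group(1)}]")
    return problems


if __name__ == "__main__":
    issues = validate(sys.stdin.read())
    for issue in issues:
        print(issue, file=sys.stderr)
    sys.exit(1 if issues else 0)  # nonzero exit fails the CI step
```

Run as a pipeline step (`hads-validate < README.md`), a nonzero exit blocks the merge, which is the whole point of exposing exit codes rather than just printing warnings.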

All MIT. Feedback welcome — especially from people running local models.

  • Damir159 a month ago

    What a clever and pragmatic idea that addresses a real and growing need.

    • catcamOP a month ago

      Thanks! To put some numbers behind the motivation:

        Rough Fermi estimate of global impact if this became standard practice:
      
        Assumptions:
        - ~1B AI queries/day touch technical docs (Copilot, ChatGPT, enterprise agents, CI/CD pipelines)
        - HADS reduces tokens read per query from ~5k to ~1.5k via manifest + targeted block reading (70% reduction)
        - ~0.003 kWh (~3 Wh) per 1k tokens for GPT-4 class inference (per published ML efficiency research)
        - ~2 min saved per query from more precise first answers
      
        Results:
        - ~3.8 TWh electricity saved/year (~350k US homes, or ~20% of Google's annual data center consumption)
        - ~12B developer hours recovered/year
        - ~$60B/year in productivity at 10% adoption
      
        These are intentionally rough — could be 10x off in either direction. The point is that documentation structure is
        economically material at scale, not just an ergonomics concern.
      
        Full writeup with methodology:
        https://medium.com/@catcam_46604/ai-is-now-the-primary-reader-of-your-docs-nobody-told-your-docs-5f7103ea3281
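The headline figures follow from the stated assumptions by straightforward multiplication. The sketch below just redoes that arithmetic; the inputs are the thread's assumptions (with energy taken as 0.003 kWh, i.e. ~3 Wh, per 1k tokens, which is the value the TWh-scale result implies), not measured data:

```python
QUERIES_PER_DAY = 1e9          # AI queries/day touching technical docs
TOKENS_SAVED = 5_000 - 1_500   # tokens no longer read per query
KWH_PER_1K_TOKENS = 0.003      # ~3 Wh per 1k tokens, GPT-4-class inference
MINUTES_SAVED = 2              # per query, from a more precise first answer

# Energy: tokens saved/day -> kWh/day -> TWh/year
kwh_per_day = QUERIES_PER_DAY * TOKENS_SAVED / 1_000 * KWH_PER_1K_TOKENS
twh_per_year = kwh_per_day * 365 / 1e9

# Time: minutes saved/day -> hours/year
hours_per_year = QUERIES_PER_DAY * MINUTES_SAVED / 60 * 365

print(f"{twh_per_year:.1f} TWh/year")           # ≈ 3.8
print(f"{hours_per_year / 1e9:.1f}B hours/year")  # ≈ 12.2
```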
  • mormy_cro a month ago

    Nice man, will try to implement it in my newest project :)
