🐕‍🦺 K9 Audit
Your agent ran overnight. The result is wrong. You open the logs: they tell you what it did, but not what it was supposed to do, and not where it started going off the rails.
Your AI agent caused a problem in production. Your boss asks what happened. You pull up a terminal screenshot. It could have been edited. Nobody trusts it.
You want to deploy an agent inside your company. Your manager asks: what happens if it goes out of bounds? You don't have a good answer. The project dies in the approval meeting.
K9 Audit is built for exactly this kind of problem.
Whether it's a single agent or a multi-agent collaboration, every action is recorded as a causal five-tuple: who acted and under what conditions, what it did, what it was supposed to do, what actually resulted, and how far the outcome diverged. Records are SHA256 hash-chained: cryptographically verifiable, tamper-evident after the fact. When something goes wrong, k9log trace --last gives you the root cause in under a second. This is not an LLM judging another LLM; K9 does not generate or guess. It records, measures, and proves.
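For intuition, the tamper-evidence guarantee is the standard hash-chain construction: each record stores the SHA256 of the previous record, so editing any past record invalidates every hash after it. A minimal sketch of the idea (illustrative only, not K9's actual ledger code):

```python
import hashlib
import json

# Minimal hash-chain sketch (illustrative only, not K9's ledger code).
def chain_append(ledger, record):
    """Append a record, linking it to the previous entry's SHA256 hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    ledger.append({"record": record, "prev_hash": prev_hash, "hash": digest})

def verify(ledger):
    """Recompute every hash; any after-the-fact edit breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

ledger = []
chain_append(ledger, {"action": "WRITE", "target": "config.json"})
chain_append(ledger, {"action": "READ", "target": "data.csv"})
assert verify(ledger)

ledger[0]["record"]["target"] = "evil.json"   # tamper with an old record
assert not verify(ledger)
```

Verification is a single linear pass over the file, which is why it stays cheap even for large ledgers.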
Quick navigation
Just want to get started? → Claude Code user · LangChain / AutoGen / CrewAI · Any Python agent
Evaluating for your team or enterprise? → What K9 Audit is · How it differs from LangSmith / Langfuse · EU AI Act Article 12 · Trust boundary · FAQ
Already integrated, going deeper? → Constraint syntax · Querying the Ledger · CI/CD gate · Real-time alerts
Contents
- Why causal auditing
- A real incident
- What K9 Audit is
- What K9 Audit is not
- How K9 Audit differs
- EU AI Act compliance (Article 12)
- Installation
- First 5 minutes
- Works with
- Quick start
- Constraint syntax reference
- AI coding agent bug tracing
- Querying the Ledger directly
- CLI reference
- Real-time audit alerts
- Architecture
- FAQ: performance · data privacy · AGPL · Python version · crash recovery · format stability
- The K9 Hard Case Challenge
- Ledger format
- License
Why causal auditing
K-9. The police dog. It doesn't clock out.
A K-9 unit doesn't file a report saying "there is a 73% probability this person committed a crime." It tracks, detects, alerts, and puts everything on record. That's K9 Audit. It lives on your machine, watches every agent action, and produces a tamper-proof causal record that can withstand forensic scrutiny.
Most observability tools give you a flat timeline. They tell you what happened, but not why an action was wrong, and not where the logical deviation actually started. When a multi-step agent goes wrong, engineers spend hours sifting through walls of text trying to find where tainted data entered the chain.
K9 Audit turns that forensic archaeology into a graph traversal. Because every record in the CIEU Ledger is linked through data flow and temporal dependencies, debugging an AI agent no longer requires manual reading. What used to take hours of log archaeology now takes a single terminal command.
Your agents work for you. K9 Audit makes sure that's actually true.
A real incident
On March 4, 2026, during a routine quant backtesting session, Claude Code attempted three times to write a staging environment URL into a production config file:
```json
{"endpoint": "https://api.market-data.staging.internal/v2/ohlcv"}
```

Because the syntax was valid, no error was thrown. A conventional logger would have buried this silently in a text file, quietly corrupting every subsequent backtest result.
Here is how K9 Audit immediately traced the root cause using the Ledger:
```
k9log trace --last

seq=451  2026-03-04 16:59:22 UTC
─── X_t  Context ──────────────────────────────────
  agent: Claude Code (session: abc123)
  action: WRITE
─── U_t  What happened ────────────────────────────
  skill: _write_file
  target: quant_backtest/config.json
  content: {"endpoint": "https://api.market-data.staging.internal/..."}
─── Y*_t  Intent Contract ─────────────────────────
  constraint: deny_content → ["staging.internal"]
  source: config/write_config.json
─── Y_t+1  Outcome ────────────────────────────────
  status: recorded (executed with silent deviation)
─── R_t+1  Assessment ─────────────────────────────
  passed: false
  severity: 0.9
  finding: content contains forbidden pattern "staging.internal"
  causal_proof: root cause traced to step #451, chain intact
```

→ Three attempts. 41 minutes apart. All recorded.
What K9 Audit is
Every action monitored by K9 Audit produces a CIEU record, a rigorously structured five-tuple written into the causal evidence ledger:

| Field | Symbol | Meaning |
|---|---|---|
| Context | `X_t` | Who acted, when, and under what conditions |
| Action | `U_t` | What the agent actually executed |
| Intent Contract | `Y*_t` | What the system expected the agent to do |
| Outcome | `Y_t+1` | What actually resulted |
| Assessment | `R_t+1` | How far the outcome diverged from intent, and why |
This is a fundamentally different category of infrastructure: tamper-evident causal evidence.
→ Full CIEU record specification
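As a mental model, a CIEU record can be pictured as a plain dictionary with those five keys. The following is a hypothetical sketch based on the table above, not the authoritative layout (that lives in the CIEU record specification):

```python
from datetime import datetime, timezone

# Hypothetical assembly of a CIEU-style five-tuple; field names follow the
# table above, but the authoritative layout is the CIEU record specification.
def make_cieu_record(agent, action, skill, target, constraint, passed, severity):
    return {
        "X_t":   {"agent": agent, "action": action,
                  "timestamp": datetime.now(timezone.utc).isoformat()},
        "U_t":   {"skill": skill, "target": target},
        "Y*_t":  {"constraint": constraint},
        "Y_t+1": {"status": "recorded"},
        "R_t+1": {"passed": passed, "overall_severity": severity},
    }

rec = make_cieu_record("Claude Code", "WRITE", "_write_file",
                       "quant_backtest/config.json",
                       'deny_content: ["staging.internal"]',
                       passed=False, severity=0.9)
assert set(rec) == {"X_t", "U_t", "Y*_t", "Y_t+1", "R_t+1"}
```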
What K9 Audit is not
- Not an interception or firewall system (Phase 1: zero-disruption observability only)
- Not an LLM-as-judge platform (it consumes zero tokens)
- Not a source of agent crashes or execution interruptions
- Not omniscient: K9 Audit only records actions that pass through a `@k9` decorator or the Claude Code hook. Any code path that bypasses instrumentation is invisible to the Ledger.
Trust boundary: The SHA256 hash chain proves that recorded evidence has not been tampered with after the fact. It does not prove that all actions were recorded. Coverage depends on how completely you instrument your agent. Use k9log health to see which skills are UNCOVERED and add constraints to close gaps.
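The coverage question is mechanical: a skill is covered only if its records carry an intent contract. A hypothetical sketch of that gap analysis over illustrative records (the real check is `k9log health`; the field layout here is assumed):

```python
# Hypothetical gap analysis: which recorded skills carry no intent contract?
# Field layout is assumed for illustration; `k9log health` is the real check.
records = [
    {"U_t": {"skill": "write_file"},  "Y*_t": {"constraint": "deny_content"}},
    {"U_t": {"skill": "fetch_quote"}, "Y*_t": {}},
    {"U_t": {"skill": "write_file"},  "Y*_t": {"constraint": "allowed_paths"}},
]
all_skills  = {r["U_t"]["skill"] for r in records}
constrained = {r["U_t"]["skill"] for r in records if r.get("Y*_t", {}).get("constraint")}
uncovered   = all_skills - constrained   # these would show as UNCOVERED

assert uncovered == {"fetch_quote"}
```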
In this phase, K9 Audit does one thing perfectly: turn hard-to-trace AI deviations into traceable, verifiable mathematics. Record, trace, verify, report. The evidence layer that everything else can be built on top of.
How K9 Audit differs
Other observability tools work like expensive cameras. K9 Audit works like an automated forensic investigator.
| | K9 Audit | Mainstream tools (LangSmith / Langfuse / Arize) |
|---|---|---|
| Core technology | Causal AI, deterministic tracking | Generative AI, probabilistic evaluation |
| Data structure | Hash-chained causal evidence ledger | Flat timeline / trace spans |
| Troubleshooting | Commands, not hours | Hours of manual log reading |
| Data location | Fully local, never uploaded | Cloud SaaS or partial upload |
| Tamper evidence | SHA256 cryptographic chain | Depends entirely on server trust |
| Audit cost | Zero tokens, zero per-event billing | Per-event / per-seat API billing |
Installation
The PyPI package is `k9audit-hook`; install it with `pip install k9audit-hook`. Once installed, the import name is `k9log`:

```python
from k9log import k9, set_agent_identity  # correct
```

Windows (one-step setup including Claude Code hook registration):
First 5 minutes
The constraints you pass to @k9() are how you tell K9 what "out of bounds" means for your agent. Copy this file, run it, then look at what K9 recorded.
```python
# k9_quickstart.py
from k9log import k9, set_agent_identity

set_agent_identity(agent_name='MyAgent')

@k9(
    deny_content=["staging.internal"],  # flag if staging URL appears
    allowed_paths=["./project/**"],     # flag if write goes outside project
    amount={'max': 500}                 # flag if trade amount exceeds limit
)
def execute_trade(symbol: str, amount: float, endpoint: str) -> dict:
    return {"status": "filled", "symbol": symbol, "amount": amount}

# Call 1: clean → should pass
execute_trade("AAPL", 100, "https://api.prod.exchange.com/v2")
# Call 2: staging URL in endpoint → should flag
execute_trade("AAPL", 100, "https://api.staging.internal/v2")
# Call 3: amount exceeds limit → should flag
execute_trade("TSLA", 9999, "https://api.prod.exchange.com/v2")
```
Run it:
```shell
python k9_quickstart.py
k9log stats          # 3 records, 2 violations
k9log trace --last   # full CIEU five-tuple for the last violation
k9log health         # coverage + integrity check
```
That's it. The Ledger is at `~/.k9log/logs/k9log.cieu.jsonl`. Every record is hash-chained and tamper-evident from this point on.
→ Continue to Quick start for Claude Code, LangChain, and other integrations
Works with
| Tool | Type | Setup |
|---|---|---|
| Claude Code | AI coding agent | Zero-config hook → |
| Cursor | AI coding editor | Decorator setup → |
| LangChain | Agent framework | Callback handler → |
| AutoGen | Multi-agent framework | Function wrapper → |
| CrewAI | Agent framework | Tool wrapper → |
| OpenClaw | Skill framework | Module-level wrap → |
| Any Python agent | - | One decorator → |
Quick start
Option 1: Claude Code โ zero-config hook (recommended)
Drop a .claude/settings.json at your project root. Every Claude Code tool call is automatically recorded โ no changes to your code or prompts.
```json
{
  "hooks": {
    "PreToolUse": [{"matcher": "*", "hooks": [{"type": "command", "command": "python -m k9log.hook"}]}],
    "PostToolUse": [{"matcher": "*", "hooks": [{"type": "command", "command": "python -m k9log.hook_post"}]}]
  }
}
```

The PostToolUse hook also parses K9Contract blocks from any .py file Claude Code writes and saves them automatically, so the next time that function is called, constraints are enforced with no decorator needed.
→ K9Contract format and rules
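For intuition, extracting a K9Contract block from a docstring can be done with a small `ast` pass. This is an illustrative sketch, not the hook's actual parser, and `source` is a hypothetical file:

```python
import ast

# Illustrative sketch of pulling a K9Contract block out of a docstring;
# the real PostToolUse parser may differ. `source` is a hypothetical file.
source = '''
def get_balance(account_id: str) -> float:
    """
    K9Contract:
      postcondition: result >= 0
    """
    return 0.0
'''

contracts = {}
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.FunctionDef):
        doc = ast.get_docstring(node) or ""
        if "K9Contract:" in doc:
            block = doc.split("K9Contract:", 1)[1]
            rules = dict(line.strip().split(": ", 1)
                         for line in block.splitlines() if ": " in line)
            contracts[node.name] = rules

assert contracts["get_balance"] == {"postcondition": "result >= 0"}
```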
Option 2: Python decorator (non-invasive tracing)
```python
from k9log.core import k9
import json

@k9(
    deny_content=["staging.internal"],
    allowed_paths=["./project/**"]
)
def write_config(path: str, content: dict) -> bool:
    # Your existing code remains completely unchanged
    with open(path, 'w') as f:
        json.dump(content, f)
    return True
```
Every call now automatically writes a CIEU record to the Ledger. If the agent violates a constraint, execution continues, but a high-severity deviation is permanently flagged in the chain.
Option 3: Config file (decoupled rules, no decorator needed)
File: `~/.k9log/config/write_config.json`

```json
{
  "skill": "write_config",
  "constraints": {
    "deny_content": ["staging.internal", "*.internal"],
    "allowed_paths": ["./project/**"]
  },
  "version": "1.0.0"
}
```

Then use `@k9` with no arguments; constraints are loaded automatically from the config file:
```python
@k9
def write_config(path: str, content: str) -> bool:
    ...
```
The config file takes effect immediately with no code changes. Useful for applying constraints to functions you can't modify, or for storing rules outside source control.
Option 4: LangChain callback handler
For agents built with LangChain, the recommended approach is to wrap your tool functions with @k9 directly; this requires zero changes to your chain or agent logic:
```python
from langchain.tools import Tool
from k9log import k9, set_agent_identity

set_agent_identity(agent_name='LangChainAgent')

@k9(query={'max_length': 500}, deny_content=["DROP TABLE"])
def search_tool(query: str) -> str:
    return results  # your existing logic unchanged

tool = Tool(name="search", func=search_tool, description="Search for information")
# Pass tool to your agent as normal; every call is now audited
```
Alternatively, K9CallbackHandler can be passed to LangChain's callbacks= parameter. However, this approach relies on LangChain's internal callback protocol and requires langchain to be installed separately:
```python
from k9log.langchain_adapter import K9CallbackHandler

handler = K9CallbackHandler()
agent = initialize_agent(tools, llm, callbacks=[handler])
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
```
Note: K9CallbackHandler requires LangChain ≥ 0.1 and is designed to be passed to LangChain. Do not call on_tool_start / on_tool_end manually, as these methods require LangChain's internal run_id argument.
→ Integration guides: Cursor, AutoGen, CrewAI, OpenClaw, and more
Constraint syntax reference
@k9 accepts two kinds of arguments:
Global constraints โ scan across all parameter values:
| Argument | Type | What it checks |
|---|---|---|
| `deny_content=["term"]` | list of strings | Fails if any parameter value contains any listed term (case-insensitive substring match) |
| `allowed_paths=["./src/**"]` | list of glob patterns | Fails if any parameter whose value looks like a file path points outside the listed directories |

> `deny_content` and `allowed_paths` do not target a specific parameter: they check every parameter in one pass. To check only a specific parameter, use per-parameter `blocklist` or `regex` instead.
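For intuition, the two global checks can be sketched in a few lines: a case-insensitive substring scan, and a glob allowlist over path-like values. Illustrative only, not K9's implementation:

```python
import fnmatch

# Illustrative re-implementation of the two global checks (not K9's code).
def check_deny_content(params, deny_terms):
    """Case-insensitive substring scan across every parameter value."""
    return [(name, term)
            for name, value in params.items()
            for term in deny_terms
            if term.lower() in str(value).lower()]

def check_allowed_paths(params, patterns):
    """Flag path-like string values that match none of the allowed globs."""
    def looks_like_path(v):
        return isinstance(v, str) and ("/" in v or "\\" in v)
    return [name for name, v in params.items()
            if looks_like_path(v)
            and not any(fnmatch.fnmatch(v, p) for p in patterns)]

params = {"endpoint": "https://api.staging.internal/v2", "amount": 100}
assert check_deny_content(params, ["STAGING.INTERNAL"]) == [("endpoint", "STAGING.INTERNAL")]
assert check_allowed_paths(params, ["./project/**"]) == ["endpoint"]
```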
Per-parameter constraints โ keyed by the exact parameter name in your function signature:
| Constraint key | Example | What it checks |
|---|---|---|
| `max` | `amount={'max': 1000}` | Value must not exceed this number |
| `min` | `amount={'min': 0}` | Value must not be below this number |
| `max_length` | `query={'max_length': 500}` | String length must not exceed this |
| `min_length` | `name={'min_length': 1}` | String length must be at least this |
| `blocklist` | `env={'blocklist': ['prod']}` | Value must not equal or contain any listed term |
| `allowlist` | `status={'allowlist': ['ok','fail']}` | Value must be one of the listed options |
| `enum` | `level={'enum': [1,2,3]}` | Value must be exactly one of the listed values |
| `regex` | `email={'regex': r'.+@.+'}` | Value must match this regular expression |
| `type` | `count={'type': 'integer'}` | Value must be this type (string, integer, float, boolean, list, dict) |
Constraining the return value
Use postcondition and invariant in a config file or K9Contract docstring to constrain what the function returns:
File: `~/.k9log/config/get_balance.json`

```json
{
  "constraints": {
    "postcondition": ["result >= 0"],
    "invariant": ["account_id != ''"]
  }
}
```
Or in the function docstring (extracted automatically by the PostToolUse hook):
```python
def get_balance(account_id: str) -> float:
    """
    K9Contract:
      postcondition: result >= 0
      invariant: len(account_id) > 0
    """
    ...
```
`postcondition` runs after the function returns (`result` is the return value). `invariant` runs before execution and checks input parameters. Both produce CIEU violations if they fail.
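One way string constraints like `result >= 0` can be evaluated is with a restricted `eval` over a small namespace. A hedged sketch (K9's real evaluator is not shown here, and evaluating untrusted expressions requires care):

```python
# Hedged sketch: evaluate constraint expression strings against a namespace.
# K9's real evaluator is not shown here; eval on untrusted input needs care.
def failed_constraints(exprs, names):
    """Return the expressions that do NOT hold for the given names."""
    safe = {"__builtins__": {}}
    return [e for e in exprs if not eval(e, safe, dict(names, len=len))]

# postcondition: checked against the return value, bound as `result`
assert failed_constraints(["result >= 0"], {"result": 42.0}) == []
assert failed_constraints(["result >= 0"], {"result": -1.0}) == ["result >= 0"]
# invariant: checked against input parameters before execution
assert failed_constraints(["len(account_id) > 0"], {"account_id": ""}) == ["len(account_id) > 0"]
```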
Full example showing all constraint types together:
```python
@k9(
    deny_content=["staging.internal", "DROP TABLE"],    # scans ALL params
    allowed_paths=["./project/**"],                     # scans ALL path-like params
    amount={'max': 10000, 'min': 0},                    # specific to 'amount' param
    recipient={'blocklist': ['re:.*@untrusted\\..*']},  # regex prefix re:
    env={'enum': ['dev', 'staging']},
    query={'max_length': 500, 'regex': r'^[a-zA-Z0-9 ]+$'}
)
def process(amount: float, recipient: str, env: str, query: str) -> dict:
    ...
```
Constraints can also be stored in ~/.k9log/config/<function_name>.json to keep them out of your source code. The decorator takes priority over the config file if both exist.
Custom constraint types
If the built-in types above don't cover your use case, register your own:
```python
from k9log.constraints import register_constraint

@register_constraint("allowed_domains")
def check_allowed_domains(param_name, value, rule_value):
    domain = str(value).split("@")[-1]
    if domain not in rule_value:
        return {
            'type': 'domain_violation',
            'field': param_name,
            'severity': 0.9,
            'message': f'{param_name} domain {domain!r} not in allowed list'
        }
    return None  # no violation

@k9(recipient={'allowed_domains': ['company.com', 'partner.org']})
def transfer(amount, recipient):
    ...
```
Important: register_constraint is process-scoped; registrations live only for the current Python process. To make custom constraints available everywhere, create a k9_plugins.py file at your project root and import it at agent startup:
```python
# k9_plugins.py - import this once at startup
from k9log.constraints import register_constraint

@register_constraint("allowed_domains")
def check_allowed_domains(param_name, value, rule_value):
    ...
```

```python
# agent_main.py or your entry point
import k9_plugins  # registers all custom constraints
from myagent import run

run()
```
AI coding agent bug tracing
20 minutes of log archaeology → 10 seconds with `k9log causal --last`.
→ Real case: how K9 traced a missing import through 3 steps
Querying the Ledger directly
The Ledger is a plain JSONL file, one record per line. You can query it directly from Python without any special API:
```python
import json
from pathlib import Path

ledger = Path.home() / ".k9log" / "logs" / "k9log.cieu.jsonl"
records = [json.loads(line) for line in ledger.read_text().splitlines() if line.strip()]

# All violations
violations = [r for r in records if not r.get("R_t+1", {}).get("passed", True)]

# Filter by severity threshold
critical = [r for r in violations if r.get("R_t+1", {}).get("overall_severity", 0) >= 0.8]

# Filter by skill name
write_violations = [r for r in violations if r.get("U_t", {}).get("skill") == "write_file"]

# Export for team review or CI artifact
with open("violations_report.json", "w") as f:
    json.dump(violations, f, indent=2, default=str)

print(f"{len(violations)} violations total, {len(critical)} critical")
```
On Windows the path is C:\Users\<username>\.k9log\logs\k9log.cieu.jsonl.
Multi-machine and team aggregation
Each machine maintains its own local Ledger. In Phase 1 there is no built-in central server. To aggregate records from multiple engineers or CI runs, collect the JSONL files and merge them in Python:
```python
import json
from pathlib import Path

# Collect ledger files from each machine / CI artifact
ledger_files = [
    Path("machine_alice/k9log.cieu.jsonl"),
    Path("machine_bob/k9log.cieu.jsonl"),
    Path("ci_run_447/k9log.cieu.jsonl"),
]

all_records = []
for f in ledger_files:
    all_records += [json.loads(l) for l in f.read_text().splitlines() if l.strip()]

violations = [r for r in all_records if not r.get("R_t+1", {}).get("passed", True)]
print(f"{len(all_records)} total records across {len(ledger_files)} sources")
print(f"{len(violations)} violations")
```
Note: merging JSONL files from different machines breaks the per-machine hash chain. Run k9log verify-log per file before merging: verify each machine's chain individually, then merge for aggregate analysis. The merged file is for analysis only, not for chain verification.
CLI reference
```shell
k9log stats                       # display Ledger summary
k9log trace --step 451            # instantly trace the root cause of a specific event
k9log trace --last                # analyze the most recent deviation
k9log causal --last               # causal chain analysis: auto-detect and find root cause
k9log causal --step 7             # causal chain analysis for a specific step
k9log verify-log                  # verify full SHA256 hash chain integrity
k9log verify-ystar                # verify intent contract coverage across all skills
k9log report --output out.html    # generate an interactive causal graph report
k9log health                      # system health check: ledger + integrity + coverage
k9log alerts status               # show alerting channel status
```
k9log health shows a skill coverage table. Skills marked UNCOVERED are being recorded but have no constraints; violations in those skills will be logged but not flagged. To fix this, add a @k9(...) decorator to the function, or create ~/.k9log/config/<skill_name>.json with your constraints. Skills marked PARTIAL have constraints on some calls but not all; check for code paths that bypass the decorator.
k9log verify-log outputs a Chain integrity: OK confirmation plus the total record count and the final hash. A clean result means no record has been silently modified since it was written. Run it before sending a report to a client, auditor, or compliance reviewer; it is cryptographic proof the evidence has not been tampered with.
k9log report --output out.html generates a self-contained HTML file with an interactive causal graph, full CIEU record table, and violation summary. Share it with a team lead for post-incident review, attach it to a compliance audit, or send it to a client as evidence that agent actions were monitored and recorded.
CI/CD gate: failing a pipeline on violations
k9log commands currently always return exit code 0. To fail a CI pipeline when critical violations exist, use the Python query pattern:
```python
# ci_check.py - run after your agent job
import json, sys
from pathlib import Path

ledger = Path.home() / ".k9log" / "logs" / "k9log.cieu.jsonl"
if not ledger.exists():
    print("No ledger found - was K9 running?")
    sys.exit(1)

records = [json.loads(l) for l in ledger.read_text().splitlines() if l.strip()]
critical = [
    r for r in records
    if not r.get("R_t+1", {}).get("passed", True)
    and r.get("R_t+1", {}).get("overall_severity", 0) >= 0.8
]

if critical:
    print(f"K9 AUDIT FAILED: {len(critical)} critical violation(s)")
    for r in critical:
        print(f"  seq={r.get('_integrity',{}).get('seq','?')} "
              f"skill={r.get('U_t',{}).get('skill','?')} "
              f"severity={r.get('R_t+1',{}).get('overall_severity','?')}")
    sys.exit(1)

print(f"K9 AUDIT PASSED: {len(records)} records, no critical violations")
sys.exit(0)
```
Call python ci_check.py as the last step in your pipeline. Exit code 1 = violations found, 0 = clean.
Real-time audit alerts
K9 Audit can push a structured CIEU alert the moment a deviation is written to the Ledger, milliseconds before you would ever think to investigate manually.
Every alert is a CIEU five-tuple, not a raw event ping. The goal is not just to tell you something happened. It is to make you fluent in reading causal evidence. A second message follows automatically 100ms later with the causal chain trace and root cause.
Configure your alert channel with a single command; no config file editing needed:

```shell
# Telegram
k9log alerts set-telegram --token YOUR_BOT_TOKEN --chat-id YOUR_CHAT_ID

# Slack
k9log alerts set-slack --webhook-url https://hooks.slack.com/services/...

# Discord
k9log alerts set-discord --webhook-url https://discord.com/api/webhooks/...

# Custom webhook
k9log alerts set-webhook --url https://your-endpoint.example.com/k9alert

# Enable / disable the whole system
k9log alerts enable
k9log alerts disable

# Check current status
k9log alerts status

# Configure Do Not Disturb (e.g. 11pm-8am, UTC+8)
k9log alerts set-dnd --start 23:00 --end 08:00 --offset 8
```
Each set-* command writes the credential directly to ~/.k9log/alerting.json and enables that channel immediately.
Architecture
```
k9log/
├── core.py              - @k9 decorator, non-invasive Ledger writer
├── logger.py            - hash-chained Ledger persistence
├── tracer.py            - incident trace: full CIEU five-tuple display
├── causal_analyzer.py   - causal DAG traversal and root cause analysis
├── verifier.py          - cryptographic chain integrity verification
├── constraints.py       - Y*_t intent contract loader and checker
├── redact.py            - automatic sensitive data masking
├── report.py            - HTML causal graph report generator
├── cli.py               - command-line interface
├── alerting.py          - real-time CIEU deviation alerts
├── identity.py          - agent identity and session capture
├── hook.py              - Claude Code PreToolUse adapter
├── hook_post.py         - Claude Code PostToolUse + K9Contract extractor
├── autocontract.py      - zero-decorator contract injection via sys.meta_path
├── langchain_adapter.py - LangChain callback handler
├── openclaw.py          - module-level batch wrapping (k9_wrap_module)
├── agents_md_parser.py  - AGENTS.md / CLAUDE.md rule parser
└── governance/          - action class registry (READ/WRITE/DELETE/EXECUTE/...)
```
Sensitive data masking (redact.py)
By default, K9 Audit runs in standard redaction mode. Parameter names matching common sensitive patterns (password, token, api_key, secret, credit_card, ssn, and others) are automatically masked before being written to the Ledger; the value is replaced with [REDACTED].
Control the redaction level via environment variable:
```shell
K9LOG_REDACT_LEVEL=off       # no masking - full params stored
K9LOG_REDACT_LEVEL=standard  # default - mask known sensitive param names
K9LOG_REDACT_LEVEL=strict    # mask all string values longer than 50 chars
```
Or set it permanently in ~/.k9log/redact.json.
strict mode is recommended for agents handling PII, medical records, or financial data.
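For intuition, standard-mode masking amounts to a name-pattern pass over parameters before they are written. An illustrative sketch, not the actual redact.py:

```python
import re

# Illustrative masking pass (not the actual redact.py): replace values whose
# parameter NAME matches a sensitive pattern with "[REDACTED]".
SENSITIVE = re.compile(r"password|token|api_key|secret|credit_card|ssn", re.I)

def redact(params):
    return {name: ("[REDACTED]" if SENSITIVE.search(name) else value)
            for name, value in params.items()}

assert redact({"api_key": "sk-123", "symbol": "AAPL"}) == \
       {"api_key": "[REDACTED]", "symbol": "AAPL"}
```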
FAQ
Will this slow down my agent?
No. @k9 is a pure Python decorator that performs one synchronous write to the local Ledger before and after each function call. Measured latency per audit is in the microsecond range, imperceptible to normal agent execution.
What happens to my agent when a deviation is detected?
In this phase, K9 Audit is designed for zero-disruption observability. Deviations are flagged in the Ledger with a high severity score and trigger real-time alerts. Your agent's execution is never blocked or interrupted. You get complete visibility without sacrificing continuity.
Where is the Ledger stored, and how large does it get?
Records are written to ~/.k9log/logs/k9log.cieu.jsonl: one JSON object per line, hash-chained, UTF-8 encoded. Each CIEU record is approximately 500 bytes, so ten thousand records occupy roughly 5 MB. Run k9log verify-log at any time to verify chain integrity.
On Windows, ~ resolves to C:\Users\<your-username>, so the full path is C:\Users\<your-username>\.k9log\logs\k9log.cieu.jsonl.
Does any data leave my machine?
No. The Ledger is written entirely to local disk. K9 Audit makes no network calls unless you explicitly configure an alert channel (Telegram, Slack, Discord, or webhook). Alert payloads contain only the CIEU record fields: no source code, no file contents beyond what you pass as function parameters.
What are the AGPL-3.0 implications for commercial use?
AGPL-3.0 allows you to use K9 Audit in commercial environments without restriction; you are not required to open-source your own agent code. The copyleft obligation only applies if you distribute a modified version of K9 Audit itself to third parties. Internal use, SaaS deployments, and CI/CD pipelines are all permitted. For OEM embedding or white-labeling, contact liuhaotian2024@gmail.com for a commercial license.
Which Python versions are supported?
Python 3.11 and above. Earlier versions lack some type-hinting and tomllib standard library features that K9 Audit uses internally. Python 3.10 support is on the roadmap.
What happens if the k9log process crashes mid-run?
@k9 writes each record synchronously before the decorated function returns. If the process crashes between the pre-call and post-call write, that record will be absent from the Ledger, and the chain will show a gap detectable by k9log verify-log. Your agent's execution is unaffected: @k9 never raises exceptions to the caller, and a crash in the audit layer does not propagate.
Is the CIEU record format stable? Will old Ledger files still work after upgrades?
The core five-tuple fields (X_t, U_t, Y_star_t, Y_t+1, R_t+1) are stable and will remain readable across v0.x releases. Additional fields may be added in future versions but existing fields will not be renamed or removed without a major version bump. The full field specification is in docs/CIEU_spec.md.
The K9 Hard Case Challenge
Bring a traceability problem that has been genuinely hard to debug. Solve it with K9 Audit. Show us what changes when troubleshooting shifts from reading text logs to querying a causal graph.
We are looking for proof that K9 can resolve deep-chain agent deviations that would otherwise take hours to untangle. The best submissions become part of the Solved Hard Cases gallery.
Ledger format
Records are written to ~/.k9log/logs/k9log.cieu.jsonl: one JSON object per line, hash-chained, UTF-8 encoded.
Full cryptographic and DAG structure specification: docs/CIEU_spec.md
Patent Notice
The CIEU architecture is covered by U.S. Provisional Patent Application No. 63/981,777: "Causal Intervention-Effect Unit (CIEU): A Universal Causal Record Architecture for Audit and Governance of Arbitrary Processes"
Users of K9log under AGPL-3.0 receive patent rights per AGPL-3.0 Section 11. For commercial licensing, contact liuhaotian2024@gmail.com; see PATENTS.md.
License
AGPL-3.0. See LICENSE.
Copyright (C) 2026 Haotian Liu