Kubernetes (K8s)
Building blocks: control loop, declarative config, container orchestration
"Kubernetes is just a reconciliation loop watching YAML files with extra steps."
A Kubernetes controller is a loop: read the desired state from a YAML file, observe the actual state of the cluster, compute the diff, take action to converge. Every controller in the system runs this same loop independently.
while True:
    desired = read_spec("deployment.yaml")  # what you asked for
    actual = observe_cluster()              # what's running
    diff = desired - actual                 # what's wrong
    for action in plan(diff):
        execute(action)                     # fix it
    sleep(interval)
Docker
Building blocks: cgroups, namespaces, union filesystem
"Docker is just cgroups and namespaces with a nice CLI and extra steps."
A container is a process with resource limits (cgroups) and an isolated view of the system (namespaces). Docker didn't invent any of this — it made it usable. The underlying kernel features have existed since 2008.
# What Docker does, the hard way
unshare --mount --uts --ipc --net --pid --fork bash
mount -t overlay overlay -o lowerdir=/base,upperdir=/diff,workdir=/work /merged
mkdir -p /sys/fs/cgroup/memory/container1   # create the cgroup before writing to it
echo $$ > /sys/fs/cgroup/memory/container1/cgroup.procs
echo "512m" > /sys/fs/cgroup/memory/container1/memory.limit_in_bytes
Dropbox
Building blocks: rsync, inotify, cloud storage
"Dropbox is just rsync with a GUI and extra steps."
Watch a folder for changes, sync the diffs to a remote server. Dropbox's genius was making that invisible — but the primitives are file-watching, delta transfer, and a storage backend. BrandonM was right about the parts, wrong about the product.
# Watch for changes, sync the diffs
inotifywait -mr ~/Dropbox -e modify,create,delete |
while read path action file; do
    rsync -avz ~/Dropbox/ remote:~/backup/
done
Embeddings (Vector Embeddings · Semantic Embeddings)
Building blocks: hash function, nearest-neighbor search
"Embeddings are just hash functions that preserve similarity with extra steps."
An embedding is a function that maps text to a fixed-size array of numbers. Similar text maps to nearby points. 'Semantic search' is computing this hash, then finding the nearest neighbors. It's a hash function that preserves meaning instead of uniqueness.
// "Embedding" — text in, number array out
const vec = await embed("how do I reset my password");
// [0.021, -0.187, 0.441, ...] (1536 floats)

// "Semantic search" — nearest neighbor lookup
const results = await vectorDB.query(vec, { topK: 5 });
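The "nearest neighbor" step is nothing exotic either: cosine similarity plus a sort. A minimal brute-force sketch in Python, with toy 3-dimensional vectors standing in for real 1536-dimensional embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # similarity = dot(a, b) / (|a| * |b|), in [-1, 1]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(query: list[float], index: dict[str, list[float]], top_k: int = 2):
    # brute-force "vector search": score everything, sort, take the top k
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [text for text, _ in scored[:top_k]]

index = {
    "reset your password": [0.9, 0.1, 0.0],
    "change account email": [0.7, 0.3, 0.1],
    "pricing plans":        [0.0, 0.1, 0.9],
}
print(nearest([0.8, 0.2, 0.0], index, top_k=2))
# → ['reset your password', 'change account email']
```

A vector database replaces the sort with an approximate index (HNSW, IVF) so it scales past brute force, but the contract is the same: points in, nearest points out.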
Guardrails (AI Safety Filters · Content Filters · Output Guards)
Building blocks: input validation, output sanitization, regex / classifier filters
"Guardrails are just input validation and output sanitization with extra steps."
Check the input before processing it. Check the output before returning it. Reject or transform anything that doesn't pass. It's the same pattern as web form validation and HTML sanitization — applied to natural language instead of HTML.
function guardrail(input: string, output: string) {
  if (containsPII(input)) throw new Error("PII detected in input");
  if (isOffTopic(input)) throw new Error("Input out of scope");
  if (containsHarmful(output)) return FALLBACK_RESPONSE;
  if (!matchesSchema(output)) return retry(input);
  return output;
}
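The helper names in the snippet above (containsPII, isOffTopic, and so on) are placeholders; in practice each one is a regex pass or a small classifier. A runnable sketch of just the PII check, with illustrative patterns that are nowhere near production-grade coverage:

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN shape
]

def contains_pii(text: str) -> bool:
    return any(p.search(text) for p in PII_PATTERNS)

def guardrail_input(text: str) -> str:
    if contains_pii(text):
        raise ValueError("PII detected in input")
    return text

print(contains_pii("mail me at alice@example.com"))  # → True
print(contains_pii("reset my password"))             # → False
```

The classifier-based variants swap the regex for a model call, but the control flow (check, then reject or pass through) is identical.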
Prompt Engineering (Prompt Design · Prompt Crafting)
Building blocks: natural language, markdown
"Prompt engineering is just writing clear instructions, no extra steps."
A system prompt is a README for the model. 'Few-shot examples' are worked examples. 'Chain of thought' is asking someone to show their work. Prompt engineering is technical writing — the skills transfer directly, the mystification doesn't.
You are a code reviewer. When reviewing, check for:
1. Security: hardcoded secrets, injection vulnerabilities
2. Performance: O(n²) loops, unnecessary allocations

Example:
Input: `eval(user_input)`
Output: "CRITICAL: arbitrary code execution via eval()"
RAG (Retrieval-Augmented Generation)
Building blocks: search index, string concatenation
"RAG is just search + string concatenation with extra steps."
Search your documents for relevant chunks. Concatenate them into the prompt. Call the LLM. That's RAG. The 'retrieval' is a search query, the 'augmentation' is string concatenation, and the 'generation' is the same LLM call you were already making.
const chunks = await searchIndex.query(userQuestion, { topK: 5 });
const context = chunks.map(c => c.text).join("\n\n");
const response = await llm.chat({
  system: `Answer using this context:\n${context}`,
  messages: [{ role: "user", content: userQuestion }],
});
Serverless (FaaS · Functions as a Service · Lambda Functions)
Building blocks: process isolation, event-driven invocation, managed infrastructure
"Serverless is just someone else's server with process isolation and extra steps."
There is no cloud, it's just someone else's computer. Serverless takes this one step further: it's someone else's process on someone else's computer. You upload a function, a trigger fires, your code runs in an isolated environment, you get billed by the millisecond.
# What "serverless" looks like from the platform's perspective
def handle_request(event):
    container = pool.get_or_create("user-123-fn-abc")
    result = container.invoke(user_function, event)
    bill(user="user-123", duration_ms=container.last_duration)
    return result
Webhooks (HTTP Callbacks · Event Notifications · Push Notifications)
Building blocks: HTTP POST, callback function
"Webhooks are just HTTP POST callbacks with extra steps."
When something happens, send an HTTP POST to a URL someone registered. That's a webhook. 'Event-driven architecture' in SaaS is usually just one server POSTing JSON to another server's endpoint when state changes.
// The webhook sender — just a POST request
await fetch(registeredUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ event: "payment.succeeded", data: payment }),
});
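Most of the real "extra steps" are delivery guarantees: retries, and an HMAC signature so the receiver can verify the POST actually came from the claimed sender (this is what Stripe's and GitHub's signature headers do). A sketch of both sides of the signature check, with a made-up secret and header convention:

```python
import hashlib
import hmac
import json

SECRET = b"whsec_shared_secret"  # exchanged out-of-band when the URL is registered

def sign(body: bytes) -> str:
    # Sender: attach an HMAC of the raw payload, e.g. in an X-Signature header
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    # Receiver: recompute over the raw body and compare in constant time
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = json.dumps({"event": "payment.succeeded", "id": "evt_123"}).encode()
print(verify(body, sign(body)))             # untampered payload: True
print(verify(body + b"x", sign(body)))      # tampered payload: False
```

Note the receiver must verify against the raw request bytes, not a re-serialized parse of them, or the digests won't match.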
Memory
Building blocks: key/value store, file append, system prompt injection
"Memory is just a key/value store with extra steps."
The LLM can't remember anything between conversations — it's stateless. 'Memory' is just your application writing facts to a file (or database) and reading them back into the system prompt next time. The model isn't remembering. You are.
// "Remembering" something
await fs.appendFile('memory.txt', `User prefers TypeScript\n`);

// "Recalling" it next session
const memories = await fs.readFile('memory.txt', 'utf-8');
const response = await llm.chat({
  system: `What you know about this user:\n${memories}`,
  messages: [userMessage],
});
Function Calling (Tool Use · OpenAI Functions · Tool Calling)
Building blocks: JSON serialization, function dispatch
"Function calling is just JSON serialization and function dispatch with extra steps."
The LLM outputs JSON describing which function to call and with what arguments. You parse it and call the function. The API wraps this in structured types, but that's the whole thing.
// LLM returns: { name: "get_weather", arguments: { location: "NYC" } }
const response = await llm.chat(messages, { tools });
if (response.tool_calls) {
  for (const call of response.tool_calls) {
    const fn = tools[call.name];              // look up the function
    const result = await fn(call.arguments);  // call it
    messages.push(toolResult(call.id, result));
  }
}
MCP (Model Context Protocol)
Building blocks: JSON-RPC, stdio
"MCP is just JSON-RPC over stdio with extra steps."
MCP is JSON-RPC 2.0 over stdio. A tool call is a JSON-RPC request sent to a subprocess on stdin; the result comes back on stdout. Same pattern as LSP.
// Client → Server (stdin)
{"jsonrpc":"2.0","method":"tools/call","params":{"name":"read_file","arguments":{"path":"/foo"}},"id":1}

// Server → Client (stdout)
{"jsonrpc":"2.0","result":{"content":[{"type":"text","text":"file contents..."}]},"id":1}
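Stripped of transport details, "calling a tool" is writing one JSON line to the server process's stdin and reading one back from its stdout. A minimal sketch of the client's framing logic (the read_file tool comes from the messages above; in a real client these strings would go to and from a subprocess pipe):

```python
import json

def make_request(req_id: int, tool: str, arguments: dict) -> str:
    # One JSON-RPC 2.0 request, written as a single line to the server's stdin
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
        "id": req_id,
    }) + "\n"

def parse_response(line: str) -> dict:
    # One JSON-RPC 2.0 response, read back from the server's stdout
    msg = json.loads(line)
    if "error" in msg:
        raise RuntimeError(msg["error"])
    return msg["result"]

wire = make_request(1, "read_file", {"path": "/foo"})
result = parse_response(
    '{"jsonrpc":"2.0","result":{"content":[{"type":"text","text":"file contents..."}]},"id":1}'
)
print(result["content"][0]["text"])  # → file contents...
```

Everything else MCP adds (initialization handshake, tool discovery via tools/list, capability negotiation) is more methods on the same request/response channel.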
Agents (Agentic AI · AI Agents · Autonomous Agents)
Building blocks: while loop, LLM call, tool dispatch
"Agents are just while loops with an LLM as the transition function, with extra steps."
An agent is a while loop. Each iteration: send messages to LLM, get back either a tool call or a final response. Execute the tool, append the result, repeat. Everything else is optimization.
messages = [system_prompt, user_message]
while True:
    response = llm.chat(messages)
    if response.has_tool_calls():
        for call in response.tool_calls:
            result = dispatch(call.name, call.arguments)
            messages.append(tool_result(call.id, result))
    else:
        print(response.text)
        break
Skills (Gems · GPTs · Custom Instructions)
Building blocks: markdown, YAML frontmatter, prompt templates
"Skills are just markdown files with YAML frontmatter with extra steps."
A skill is a markdown file with YAML frontmatter that gets appended to the system prompt. The LLM reads it like any other instruction. There's no runtime magic — it's string concatenation.
---
name: code-reviewer
description: Reviews code for correctness and style
---

When reviewing code, check for:
1. Off-by-one errors
2. Unhandled edge cases
3. Missing error handling

Always explain *why* something is wrong, not just that it is.
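The "string concatenation" claim is easy to make concrete: parse the frontmatter, then append the body to the system prompt. A minimal sketch (the split-based parsing is naive and breaks on values containing "---" or ":"; real loaders use a YAML library, and the "# Skill:" header format here is made up for illustration):

```python
SKILL = """---
name: code-reviewer
description: Reviews code for correctness and style
---
When reviewing code, check for off-by-one errors."""

def load_skill(text: str) -> tuple[dict, str]:
    # Naive frontmatter split: "---\n<metadata>\n---\n<body>"
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, value = line.split(":", 1)
        meta[key.strip()] = value.strip()
    return meta, body.strip()

def build_system_prompt(base: str, skill_text: str) -> str:
    meta, body = load_skill(skill_text)
    # "Activating a skill" is appending its body to the system prompt
    return f"{base}\n\n# Skill: {meta['name']}\n{body}"

prompt = build_system_prompt("You are a helpful assistant.", SKILL)
print(prompt)
```

The metadata exists so the host application can decide *which* skills to concatenate; the model only ever sees the resulting string.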