Ask HN: What are you working on (March 2026)
I have a weekend project I am building for Home Assistant: https://getsmarthomefloorplan.com/
The general idea is to have a floorplan dashboard that is easy to set up. I was never good with SVG editors or 3D modeling, and wanted something simpler.
The goal is that it connects to devices you already have in Home Assistant and visualizes things like lights, sensors, and potential issues directly on the floorplan.
Still early days and very much a work in progress, but it’s free to use. Would love feedback.
https://talimio.com/ Generate fully personalized courses from a prompt. Fully interactive.
New features shipped last month:
- Adaptive practice: LLM generates and grades questions in real time, then uses Item Response Theory (IRT) to estimate your ability and schedule the optimal next question. Replaces flashcards, especially for math and topics where each question needs to be fresh even when covering the same concept.
- Interactive math graphs (JSXGraph) that are gradable
- Single-image Docker deployment for easy self-hosting
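For the curious, here's a rough sketch of how an online 2PL IRT loop can work: estimate ability with a gradient step on the log-likelihood after each response, then pick the next question by Fisher information. This is a generic illustration, not Talimio's actual implementation; all names and the learning rate are made up.

```typescript
// Hypothetical sketch of online 2PL IRT ability estimation.
// Not Talimio's code; names and parameters are illustrative.

interface Item {
  a: number; // discrimination
  b: number; // difficulty
}

// Probability of a correct response under the 2PL model.
function pCorrect(theta: number, item: Item): number {
  return 1 / (1 + Math.exp(-item.a * (theta - item.b)));
}

// One gradient-ascent step on the log-likelihood after observing
// a response (correct = 1, incorrect = 0).
function updateTheta(theta: number, item: Item, correct: 0 | 1, lr = 0.5): number {
  const p = pCorrect(theta, item);
  return theta + lr * item.a * (correct - p); // d logL / d theta
}

// Fisher information of an item at the current ability estimate:
// a^2 * p * (1 - p), which peaks where difficulty is near theta.
function itemInfo(theta: number, it: Item): number {
  const p = pCorrect(theta, it);
  return it.a * it.a * p * (1 - p);
}

// "Optimal next question" = most informative item in the pool.
function nextItem(theta: number, pool: Item[]): Item {
  return pool.reduce((best, it) =>
    itemInfo(theta, it) > itemInfo(theta, best) ? it : best
  );
}
```

A correct answer nudges the ability estimate up, an incorrect one down, and the scheduler keeps serving questions whose difficulty sits near the current estimate.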
Open source: https://github.com/SamDc73/Talimio
The IRT angle is interesting — most adaptive learning tools just do basic spaced repetition, but using Item Response Theory to estimate ability level in real-time is a much more honest approach to "personalized." The JSXGraph integration for gradable math graphs is a nice touch too, that's a hard problem. Quick question: how do you handle subjects where the "right answer" is more ambiguous? Does the LLM grading struggle with open-ended questions outside of math?
yeah we use an LLM for the grading .. (for the free form questions)
the flow is basically:
When a practice question is generated, the model produces the question + the reference answer together, but the user only sees the question. Then on submit, a smaller model grades the learner's answer against that reference answer + the grading criteria.
I benchmarked a bunch of judge models for this on a small multi-subject set, and `gpt-oss-20b` ended up being a very solid sweet spot for quality/speed/structured-output reliability. on one of the internal benchmarks it got ~98.3% accuracy over 60 grading cases, with ~1.6s p50 latency, so it feels fast enough to use live.
for math, it’s not just LLM grading though:
- `SymPy` for latex/math expressions, so if the learner writes an equivalent answer in a different form, it still gets marked correct; `(x+2)(x+3)` and `x^2 + 5x + 6` both pass. (but I might remove that one since it could probably be replaced by an LLM, and it's a niche feature that adds some maintenance cost)
- tolerance-based checks for the JSXGraph board state; so if you plotted x = 5.2 on the graph instead of 5.3, it still passes as within the margin of error, but you get a message about it
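A tolerance check like that can be very small. Here's a minimal sketch of the idea (illustrative only, not the actual JSXGraph integration; the function name, result shape, and default tolerance are made up):

```typescript
// Sketch of a tolerance-based grader for a plotted value:
// exact answers pass silently, near-misses pass with a note,
// everything else fails. Names and tolerance are illustrative.

interface GradeResult {
  pass: boolean;
  note?: string;
}

function gradePlotted(plotted: number, expected: number, tol = 0.2): GradeResult {
  const err = Math.abs(plotted - expected);
  if (err === 0) return { pass: true };
  if (err <= tol) {
    // Within the margin of error: pass, but tell the learner.
    return { pass: true, note: `Close: you plotted ${plotted}, the exact answer is ${expected}.` };
  }
  return { pass: false, note: `You plotted ${plotted}; expected ${expected}.` };
}
```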
I also tried embedding/similarity checking early on, but it was noticeably worse on tricky answers, so I didn’t use that as the main path.
I'm building Pithos (https://pithos.dev), a zero-dependency TypeScript utility ecosystem.
Five modules, one package: data utilities (Arkhe), schema validation (Kanon), Result/Option types (Zygos), typed error classes (Sphalma), and a Lodash migration bridge (Taphos).
The idea is that these patterns compose natively: Validate with Kanon, get a typed Result back via Zygos, chain transformations with Arkhe. One pipeline, no try/catch, full type inference.
Benchmarks: ~4x smaller and 5-11x faster than Zod 4, ~21x smaller than Lodash, ~3x smaller than Neverthrow.
Available on npm as @pithos/core
Calens – adds analytics and better editing to Google Calendar (calens.dev)
I track everything in my Google Calendar — work blocks, side projects, gym, social time. But I could never answer 'where did my time actually go this week?' Google Workspace has Time Insights, but it's locked to paid accounts and doesn't work for personal Google Calendar.
Calens fills that gap: GitHub-style heatmap showing 52 weeks of calendar activity, weekly/monthly time breakdowns by calendar or tag, a progress chart of planned vs completed time, and a cleaner in-page event editor. Everything runs on-device — no servers, no tracking, no data leaving the browser.
Early-stage, looking for people who already log their life in Google Calendar and want better data on their habits. Happy to give free lifetime access in exchange for honest feedback.
I am going to write an original in-memory database in JavaScript. I hate SQL and believe I can write something that executes faster than existing solutions while also feeling natural to JavaScript: storage and search via objects and arrays.
Interesting project. A few questions that came to mind:

How do you handle GC pressure at scale? V8's hidden classes make homogeneous object arrays fast, but the per-object overhead adds up — 100K entries is already 6-8 MB of metadata alone, and major GC pauses become unpredictable.

What's the persistence story? The moment you serialize to IndexedDB or OPFS, the "native structures" advantage disappears. Have you looked at columnar formats to keep it fast?

How do you handle compound queries without a planner? Something like "age > 30 AND city = 'Paris' ORDER BY name" needs an index selection strategy, otherwise you're back to full scans.

The part I find most compelling is reactive queries — define a filter, then as objects land in the store (from DOM extraction, a WebSocket, whatever), results update incrementally via Proxy interception. No re-scan. That's not really a database, it's a live dataflow layer.

Concrete example: a browser extension that extracts product data from whatever page you're on. Each page dumps heterogeneous objects into the store. A reactive query like "items where price < 50 and source contains 'amazon'" updates in real time as you browse. No server, no SQL, just JS objects flowing through live filters. That would be genuinely useful and hard to do well with existing tools.
I have not gotten far enough for that kind of load testing. I am working on it, but it's still incomplete. My experience with GC-related issues is that the frequency of calls is more of a concern than the size of the allocations. So I would have to monitor for memory spikes, which I can do from Node but not so much from the browser.
Quick question before going further: is this an exercise in language internals, or do you have a concrete use case in mind?
Asking because the answer changes the architecture significantly. If you're targeting live in-page data — extracting objects from the DOM as you browse, filtering them reactively — you may not need storage at all.
A Proxy-based observation layer gives you reactive queries without allocating anything new: the objects already exist in the tab's heap, you're just watching them mutate. No GC pressure, no persistence headaches, no query planner needed. That covers most of what you described: "items where price < 50 updates as you browse" is an event subscription with pattern matching, not a database problem.
The cases where you actually need storage — and therefore need to think about heap budgets, GC, serialization, query planning — are narrower:
- Cross-session persistence (you want the data after the tab closes)
- Cross-tab aggregation (comparing prices across multiple open tabs simultaneously)
- Queries over historical data (not just what's on screen now, but what you saw across 20 pages of browsing)
Those are real storage problems.
But they're also the cases where you're competing with IndexedDB, OPFS, and SQLite WASM — and "I hate SQL" stops being enough of a reason to rebuild from scratch. What's the actual workflow you're trying to support?
It is mostly experimental, but there is a very tiny valid use case.
I have a strictly personal application at: https://github.com/prettydiff/aphorio
In that project I have various data artifacts stored in memory that I am constantly having to query in various ways:
* sockets per server
* servers and their sockets
* ports in use and by what
* docker containers and their active state
* various hardware and OS data lists
Currently all this data is just objects attached to a big object, all defined by TypeScript interfaces. I am storing the information just fine, but getting the information I need for a particular task, in that task's format, requires a variety of different logic and object definitions in the form of internal services.
Instead it would be nice to have information stores that contain all I need, the way a SQL database does with tables. Except I hate the SQL language, and it's not as fast as you would think. Last I saw, the fastest SQL-based database is SQLite, and it's really only about 3x faster than using primitive read/write streams to the file system. I can do much faster by not dicking around with language constructs.
My proposal is to store the database in something that vaguely resembles a database table but is really just JavaScript objects/arrays in memory as part of the application's current runtime, and that can return artifacts in either object or array format. Queries would themselves be JavaScript objects. I could have a table for server data, socket data, port data, and each record links back to records in other tables as necessary, kind of like SQL foreign keys. So in short: stores, functions to do all the work, and a storage format that can return either objects or arrays and take both objects and arrays as queries.
The reason I want to store the data as both objects and arrays is a performance hack discovered by Paul Heckel in 1978. The stores would actually be a collection of objects keyed by a unique primary key that can be referenced as though it were an array.
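A minimal sketch of that dual-view idea, under the assumptions described above (one record object, reachable both by primary key and by array index); the class and method names are made up for illustration, not from the project:

```typescript
// Sketch of a dual-view store: each record lives once, but is
// reachable both via a Map keyed by primary key (O(1) lookups)
// and via an array (ordered scans). Illustrative only.

class Store<T extends { id: string }> {
  private byId = new Map<string, T>();
  private rows: T[] = [];

  insert(record: T): void {
    if (this.byId.has(record.id)) throw new Error(`duplicate key ${record.id}`);
    this.byId.set(record.id, record);
    this.rows.push(record); // same object, two views: no copy
  }

  // Object-style access: lookup by primary key.
  get(id: string): T | undefined {
    return this.byId.get(id);
  }

  // Array-style access: full scan with a predicate.
  scan(pred: (r: T) => boolean): T[] {
    return this.rows.filter(pred);
  }
}
```

Because both views hold references to the same record objects, an update through one view is immediately visible through the other; the cost is keeping the two containers consistent on insert and delete.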
Three real risks in your current approach before anything else:

- Shared references mutate silently — in JS, objects passed between your "tables" are aliases, not copies, so a mutation in one place propagates everywhere with no transaction and no rollback.
- No atomicity — Node is single-threaded, but async I/O means two callbacks can interleave writes on the same structure with no guarantee a multi-step update lands cleanly.
- Everything disappears on crash — for socket/port/container state that's probably fine since it's observable from the system anyway, but you have no history.
That said, you may not need to leave your stack at all. V8's native Map is already a key-value store — O(1) reads, no overhead, typed in TypeScript. Your "tables" are just Maps and cross-referencing is composite string keys:
sockets.set(`${serverId}:${socketId}`, socketData). No library, no dependency, no SQL. This covers your use case as described.
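Spelled out, the composite-key Map approach looks something like this (a sketch with invented names, not code from either project):

```typescript
// Sketch of cross-referencing via composite string keys in a
// native Map. Names and data shapes are illustrative.

interface SocketData { state: string }

const sockets = new Map<string, SocketData>();

function socketKey(serverId: string, socketId: string): string {
  return `${serverId}:${socketId}`;
}

sockets.set(socketKey("web1", "s42"), { state: "open" });

// "All sockets for one server" becomes a prefix scan over keys.
function socketsFor(serverId: string): SocketData[] {
  const prefix = `${serverId}:`;
  return [...sockets.entries()]
    .filter(([k]) => k.startsWith(prefix))
    .map(([, v]) => v);
}
```

Point lookups stay O(1); the prefix scan is O(n) over keys, which is fine at the scale of sockets and ports but is exactly where a real index would earn its keep.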
If you want ACID transactions and persistence without SQL, look at lmdb-js — a Node binding on LMDB, the fastest embedded KV store in existence, zero-copy reads, used in production for 20 years. Your tables become named databases, your records are typed values, your cross-references are composite keys. Same mental model you're building, with 20 years of correctness guarantees underneath.
What's the actual reason for building from scratch rather than using native Map for the in-memory case?
Prompt injection detection library in Go, zero regex.
Most injection evasion works by making text look different to a scanner than to the LLM. Homoglyphs, leet speak, zero-width characters, base64 smuggling, ROT13, Unicode confusables — the LLM reads through all of it, but pattern matchers don't.
The project is two curated layers, not code:
Layer 1 — what attackers say. ~35 canonical intent phrases across 8 categories (override, extraction, jailbreak, delimiter, semantic worm, agent proxy, rendering...), multilingual, normalized.
Layer 2 — how they hide it. Curated tables of Unicode confusables, leet speak mappings, LLM-specific delimiters (<|system|>, [INST], <<SYS>>...), dangerous markup patterns. Each table is a maintained dataset that feeds a normalisation stage.
The engine itself is deliberately simple — a 10-stage normalisation pipeline that reduces evasion to canonical form, then strings.Contains + Levenshtein. Think ClamAV: the scan loop is trivial, the definitions are the product.
Long term I'd like both layers to become community-maintained — one curated corpus of injection intents and one of evasion techniques, consumable by any scanner regardless of language or engine.
Everything ships as go:embed JSON, hot-reloadable without rebuild. No regex (no ReDoS), no API calls, no ML in the loop. Single dependency (golang.org/x/text). Scans both inputs and LLM outputs.
    result := injection.Scan(text, injection.DefaultIntents())
    if result.Risk == "high" { ... }
Following up on the comment I made last month, I'm a solo dev building a handful of apps across different niches.
- Linetris ( https://apps.apple.com/us/app/linetris-daily-line-puzzle/id6... ), a daily puzzle game where you fill an 8x8 grid with Tetris-like pieces to clear lines. Think Wordle meets Tetris. Daily challenges, leaderboards, and competitive play against friends.
- Kvile ( https://kvile.app ) — A lightweight desktop HTTP client built with Rust + Tauri. Native .http file support (JetBrains/VS Code/Kulala compatible), Monaco editor, JS pre/post scripts, SQLite-backed history. Sub-second startup. MIT licensed, no cloud, your requests stay on your machine. Think Postman without the bloat and login walls.
- Mockingjay ( https://apps.apple.com/us/app/mockingjay-secure-recorder/id6... ) — iOS app that records video and streams AES-256-GCM encrypted chunks to your Google Drive in real-time. By the time someone takes your phone, the footage is already safe in the cloud. Built for journalists, activists, and anyone who needs tamper-proof evidence. Features a duress PIN that wipes local keys while preserving cloud backups, and a fake sleep mode that makes the phone look powered off during recording.
- Stao ( https://stao.app ) — A simple sit/stand reminder for standing desk users. Runs in the system tray, tracks your streaks, zero setup. Available on macOS, Windows, Linux, iOS, and Android.
- MyVisualRoutine ( https://myvisualroutine.com ) — This one is personal. I have three kids, two with severe disabilities. Visual schedules (laminated cards, velcro boards) are a lifeline for non-verbal children, but they're a nightmare to manage and they don't leave the house. So I built an app that lets you create a full visual routine in about 20 seconds and take it anywhere. Choice boards, First/Then boards, day plans, 50+ preloaded activities, works fully offline. Free tier is genuinely usable. Available on iOS and Android.
- Biblewise — a Bible trivia game I originally built for my niece and nephew but ended up with three modes: adventure (progressive levels across 6 categories), daily challenges with streak tracking, and a timed mode. Built with SwiftUI + SwiftData, offline-first. https://apps.apple.com/us/app/biblewise-bible-quiz-game/id67...
- Neimr — a collaborative naming app with Tinder-style swiping. Create a survey for baby names, pet names, business names, etc., invite your partner/friends, and it finds which names you all agree on. Built with Flutter + Firebase. https://apps.apple.com/us/app/neimr-swipe-find-names/id67582...