300 Founders, 3M LOC, 0 engineers. Here's our workflow
My co-founder Tyler Brown and I have been building our product for 6 months. We work out of a co-working space Tyler founded that houses 300 founders, and we've gleaned agentic coding tips and tricks from them.
Neither of us came from a traditional SWE background. Tyler was a film production major; I did informatics. Our codebase is a 300k-line Next.js monorepo, and at any given time we have 3-6 AI coding agents running in parallel across git worktrees.
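The worktree setup is simpler than it sounds: one checkout per in-flight feature, so parallel agents never stomp on each other's working directory. A minimal sketch (the repo path and feature names are made up for illustration):

```shell
#!/usr/bin/env sh
# One git worktree per feature branch: each agent gets an isolated
# checkout, all sharing the same underlying .git object store.
set -e

repo=$(mktemp -d)/demo
git init -q -b main "$repo"
cd "$repo"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m init

for feature in auth-refresh billing-webhooks search-index; do
  # Creates ../wt-<feature> checked out on a new feat/<feature> branch.
  git worktree add -q "../wt-$feature" -b "feat/$feature"
done

git worktree list
```

Each agent then runs with its worktree directory as its working root; merging back to main is an ordinary branch merge.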
Every feature follows the same four-phase pipeline, enforced with custom Claude Code slash commands:
1. /discussion - have an actual back-and-forth with the agent about the codebase. Spawns specialized subagents (codebase-explorer, pattern-finder) to map the territory. No suggestions, no critiques, just: what exists, where it lives, how it works. This is the rabbit hole loop. Each answer generates new questions until you actually understand what you're building on top of.
2. /plan - creates a structured plan with codebase analysis, external research, pseudocode, file references, task list. Then a plan-reviewer subagent auto-reviews it in a loop until suggestions become redundant. Rules: no backwards compatibility layers, no aspirations (only instructions), no open questions. We score every plan 1-10 for one-pass implementation confidence.
3. /implement - breaks the plan into parallelizable chunks, spawns implementer subagents. After initial implementation, Codex runs as a subagent inside Claude Code in a loop with 'codex review --branch main' until there are no bugs. Two models reviewing each other catches what self-review misses.
4. Human review. Single responsibility, proper scoping, no anti-patterns. Refactor commands score code against our actual codebase patterns (target: 9.8/10) and help us find "hot spots", code smells, and general refactor opportunities. If something's wrong, go back to /discussion, not /implement.
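For anyone who hasn't set these up: Claude Code custom slash commands are just markdown prompt files under .claude/commands/ (with $ARGUMENTS standing in for whatever you type after the command). The real ones are in the repo linked below; a stripped-down, purely illustrative sketch of a /discussion-style command:

```markdown
<!-- .claude/commands/discussion.md — illustrative sketch, not our real file -->
Explore the codebase before proposing anything.

1. Spawn a codebase-explorer subagent to map the files relevant to: $ARGUMENTS
2. Spawn a pattern-finder subagent to list the conventions those files follow.
3. Report only what exists, where it lives, and how it works. No suggestions,
   no critiques.
4. End with the open questions your findings raise, so we can keep digging.
```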
The biggest lesson: the fix for bad AI-generated code is almost never "try implementing again." It's "we didn't understand something well enough." Go back to the discussion phase.
I wrote up the full workflow, with details on the death loop, PR criteria, and tooling, on my personal blog; happy to share if folks are interested.
All Claude Code commands and agents are open source: https://github.com/Dcouple-Inc/Pane/tree/main/.claude/commands
I built Pane using this workflow over the last month or so and started using it every day a little over a week ago. Each workspace gets its own worktree and session. Takes me less than 30s to go from idea -> discussion now. The repo link above takes you to it.
On a good day I merge 6-8 PRs. Happy to answer questions about the workflow, costs, or tooling for this volume of development.

Here's a more in-depth writeup on the workflow: https://www.runpane.com/blog/ai-native-development-workflow

Oh, and arguably the best part (which I forgot to mention): our terminals continue running in the cloud, so dev work isn't blocked by our computers going to sleep.