Structured methodology for AI-assisted development


Have you ever caught yourself typing the exact same prompt for the third time this week?

You know the scenario. You open your AI chat, carefully reconstruct that perfect prompt where Claude acts like a senior architect, dump in the same context, explain the same requirements... and you realize you're essentially copy-pasting your brain every single session.

KISS. Start with the basics.

Reuse prompts: introducing tasks

Let's start with the most basic problem: you keep asking your AI to do the same types of tasks. Code reviews, implementation planning, debugging sessions. Same structure, different content.

The first step? Stop rewriting everything from scratch. Start by creating a simple tasks folder of markdown files:

Each file became a reusable template, for example create-implementation-plan-task.md:
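
A minimal sketch of what such a task file can contain (the structure and wording here are illustrative, not the exact file from the repository):

```markdown
<!-- tasks/create-implementation-plan-task.md (illustrative sketch) -->
# Task: Create Implementation Plan

Your task is to create a step-by-step implementation plan for the feature
described by the user.

## Instructions
1. Restate the requirements in your own words.
2. Break the work into small, independently testable steps.
3. Call out risks, open questions, and dependencies.

## Output
A markdown document with sections: Overview, Steps, Risks, Estimates.
```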

Suddenly, instead of reconstructing my prompts from memory, I'd just reference the file. Copy, paste, customize. Simple, but it worked.

Adding context: enter the Persona Files

The task templates solved part of the problem, but I noticed something else. My AI interactions got drastically better when I gave the model a clear personality and expertise area.

"Act like a senior software engineer" worked, but "Act like Sarah, a senior backend engineer with 8 years of experience in distributed systems, known for thorough code reviews and emphasis on performance optimization" worked way better. So I created a personas folder:

dev.yml:
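
A sketch of what the persona file can look like (the field names are illustrative, not the exact schema from the repository):

```yaml
# personas/dev.yml (illustrative sketch)
name: Ada
role: Full Stack Developer
experience: 8 years across backend and frontend systems
traits:
  - thorough code reviews
  - emphasis on performance optimization
tasks:
  - tasks/create-implementation-plan-task.md
  - tasks/code-review-task.md
```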

Now my workflow became: grab a persona file and a task file, and combine them in my prompt. The AI responses became more consistent, more focused, and frankly, more useful. Notice how the persona file links to tasks; this way I can reuse the prompts/instructions I've already created.

DRY. Encapsulate implementation details.

For a few weeks, this manual system was fantastic. My process looked like this:

  1. Open Claude Code
  2. Reference the persona file: "You are Ada, a Full Stack Developer..."
  3. Reference the task file for the action I need to perform: "Your task is to conduct a code review..."
  4. Add specific context: "Here's the code to review..."
  5. Get great results
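
Put together, a full session prompt looked roughly like this (paraphrased):

```text
You are Ada, a Full Stack Developer... (contents of personas/dev.yml)
Your task is to conduct a code review... (contents of tasks/code-review-task.md)
Here's the code to review: <pasted code>
```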

It worked! But it was still... manual. Copy-pasting file names... And I found myself doing the same thing multiple times per day.

Commands

One of the first tools I used was Claude Code Custom Slash Commands, which allowed me to create custom commands for my personas and tasks.

For example, instead of referencing the persona file, I can use a command like /be dev, which fetches the content of the dev persona file and makes the Claude instance assume it.

You can use frontmatter to define the command details:
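
For example, the /be command can live in a markdown file whose frontmatter describes the command (the body below is illustrative; $ARGUMENTS is the placeholder Claude Code replaces with whatever follows the command):

```markdown
<!-- .claude/commands/be.md (illustrative sketch) -->
---
description: Assume one of the predefined personas
argument-hint: <persona-name>
---

Read the persona file at personas/$ARGUMENTS.yml and fully assume that
persona for the rest of this session.
```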

Then I noticed my workflow looked like /be dev and execute a code review on... or /be architect and create an implementation plan. The tasks were referenced from the persona file itself, so the agent could recognize that I had defined a task file, read it, and execute the action based on my instructions.

But what if I wanted to execute a task on its own? I didn't have a command for that, and I repeatedly found myself hunting down the task file and manually referencing its path. I needed a way to simply run a task.

Execute task command:

Using the same approach as the persona command, we create a command that looks up a task by name and injects it into the context automatically.
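
A sketch of that command file (the wording is illustrative):

```markdown
<!-- .claude/commands/task.md (illustrative sketch) -->
---
description: Execute a task by name
argument-hint: <task-name>
---

Find the task file matching "$ARGUMENTS" in the tasks/ folder, read it,
and execute its instructions against the current context.
```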

Use commands to automate manual workflows that require a specific sequence of actions.

Scripts

Commands work like a charm, but eventually I noticed I kept repeating this workflow:

  1. Open Claude Code
  2. Run /be <persona> and provide an instruction.

This is fine, but I was still doing it multiple times per day. I wanted a single command that would launch the AI with the persona already assumed, ready for my instruction. So I wrote a simple shell script:

claude-as.sh:
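
A minimal sketch of the script (the persona directory layout is an assumption; --append-system-prompt is the real Claude Code flag discussed below):

```sh
#!/usr/bin/env bash
# claude-as.sh (illustrative sketch): launch Claude Code with a persona preloaded.
set -euo pipefail

persona="${1:?usage: claude-as.sh <persona>}"
persona_file="$HOME/.ai/personas/${persona}.yml"   # assumed location

if [[ ! -f "$persona_file" ]]; then
  echo "Unknown persona: $persona" >&2
  exit 1
fi

# Append the persona to the system prompt so it survives the whole session.
system_prompt="$(cat "$persona_file")"
claude --append-system-prompt "$system_prompt"
```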

Aliases:
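
For example, in ~/.bashrc or ~/.zshrc (the alias names are illustrative):

```sh
alias claude-dev='claude-as.sh dev'
alias claude-architect='claude-as.sh architect'
```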

Usage:
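
With those in place, starting a persona session is a single word:

```sh
$ claude-dev
# Claude Code starts with the dev persona already loaded into the system prompt
```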

Now, as soon as I start Claude, it has already loaded the necessary context. Furthermore, this approach of appending a prompt to the system prompt worked much better than I expected, so I used the same approach to load the memory files (see claude --append-system-prompt "$system_prompt" in the script above).

The Subagent Revelation

As I used these workflows more, I noticed a problem. Long AI conversations would get... messy. My carefully crafted senior developer persona would get confused after 50 messages about different topics.

What if each task could run in its own clean environment?

This led me to experiment with subagents—basically having Claude spawn other instances of Claude for specific tasks. Each subagent gets:

  • A clean context
  • A specific persona
  • A focused task
  • Clear input/output boundaries

The main conversation stays clean while specialized agents handle specific work:
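
In Claude Code, a subagent can be defined as a markdown file with frontmatter, for example (a minimal sketch; the persona wording is illustrative):

```markdown
<!-- .claude/agents/code-reviewer.md (illustrative sketch) -->
---
name: code-reviewer
description: Reviews code changes for quality and performance issues
---

You are Sarah, a senior backend engineer known for thorough code reviews.
Review the code you are given, then report a concise list of findings back
to the main conversation. Do not modify any files.
```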

Parallel Processing: The Game Changer

Here's where things got really interesting. If I could spawn multiple subagents, why not run them in parallel?

Instead of researching one approach at a time, I could trigger multiple research streams:

  • Agent 1: Research database optimization approaches
  • Agent 2: Investigate caching strategies
  • Agent 3: Explore API design patterns

All working simultaneously, each with their own expertise, reporting back when done.
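
A prompt along these lines is enough to kick that off (paraphrased):

```text
Spawn three subagents in parallel:
1. Research database optimization approaches for our query patterns.
2. Investigate caching strategies (in-memory vs. distributed).
3. Explore API design patterns for the new endpoints.
Have each one report back a short summary with a recommendation.
```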

This wasn't just automation anymore—this was AI team coordination.

Workflow Orchestration

Once I had personas, tasks, slash commands, and parallel subagents working, the logical next step was obvious: orchestrate entire workflows.

Why manually coordinate multiple agents when I could define the whole process?

feature-implementation.yml:
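
A sketch of what such a workflow definition can look like (the schema below is illustrative, not the exact format from the repository):

```yaml
# feature-implementation.yml (illustrative sketch)
name: feature-implementation
description: End-to-end feature development workflow
steps:
  - persona: analyst
    task: analyze-requirements
  - persona: architect
    task: create-implementation-plan
  - parallel:
      - { persona: dev, task: implement-feature }
      - { persona: qa, task: write-test-plan }
  - persona: qa
    task: code-review
```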

Now I could trigger an entire feature development workflow with a single command:
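
For example (the command name here is illustrative):

```text
/workflow feature-implementation "Add rate limiting to the public API"
```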

Multiple specialized AI agents would coordinate to analyze, design, implement, and test a complete feature. All I had to do was define the workflow once.

What I Learned Building This System

Looking back at this journey from simple markdown files to orchestrated AI workflows, here are the key insights:

Start Stupidly Simple

Don't try to build the perfect system from day one. I started with markdown files in folders. That taught me what I actually needed before I built complex automation.

Manual First, Automate Second

Understanding your manual workflow is crucial. You can't automate what you don't fully understand. My weeks of copy-pasting taught me exactly which parts needed automation.

Iteration Beats Planning

Each stage of this system evolved from the pain points of the previous stage. File organization → Personas → Slash commands → Shell scripts → Subagents → Workflows. Each step solved a real problem I was experiencing.

Integration Matters More Than Features

The breakthrough moments weren't about adding features—they were about integrating with existing workflows. Command line tools, slash commands, and shell integration made the difference between "cool demo" and "daily driver."

Specialization Scales Better Than Generalization

One flexible, general-purpose AI assistant becomes chaotic with complex tasks. Multiple specialized agents with focused expertise handle complexity much better.

You Don't Need to Build Everything at Once

Here's what I want you to take away from this: You don't need to build a complete AI orchestration framework to get massive productivity gains.

Start with a folder of markdown files. Create a persona or two. Write a simple shell script. Each step will teach you what the next step should be.

The teams building these systems incrementally, learning at each stage, will have better systems than those trying to architect the perfect solution upfront.

The revolution starts with creating your first task template file. Everything else is just natural evolution from there.

The Complete Framework

If you want to see the complete implementation of this system, I've open-sourced the entire framework on GitHub: .ai - AI Agent Framework

The repository includes:

  • Pre-built Personas: Developer, Architect, Analyst, QA, DevOps, Product Owner, UX Expert, and Writer
  • Task Templates: Implementation planning, code review, estimation, debugging, and more
  • Document Templates: Consistent output formats for specs, plans, and reports
  • Utility Scripts: Shell scripts for workflow automation
  • Complete Documentation: Setup guides and usage examples

The framework is designed to be extensible—you can fork it, customize the personas to match your team's expertise, add your own tasks, and build on top of it.

Key Features

Template-Driven Generation:

  • Placeholder replacement for dynamic content
  • Conditional sections based on context
  • Repeatable blocks for structured output
  • Embedded AI instructions for consistent results
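
As an illustration, a template with placeholders and embedded instructions might look like this (the syntax below is a sketch; see the repository for the real format):

```markdown
<!-- templates/implementation-plan.md (illustrative sketch) -->
# Implementation Plan: {{feature_name}}

<!-- AI: Summarize the feature in two sentences. -->
{{overview}}

<!-- AI: Repeat this block for each task. -->
## Task: {{task_name}}
- Effort estimate: {{estimate}}
- Depends on: {{dependencies}}

<!-- AI: Include this section only if there are open questions. -->
## Open Questions
{{questions}}
```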

Multi-Agent Coordination:

  • Each persona has defined roles and responsibilities
  • Tasks can reference other tasks for complex workflows
  • Agents maintain context isolation for focused results

Workflow Orchestration:

  1. Story Creation: Define high-level goals and requirements
  2. Implementation: Execute tasks sequentially with proper testing
  3. Review and Deployment: Validate quality and release

Your Next Step: Fork the .ai repository, customize a persona to match your style, and try executing one task. Start simple—maybe a code review or an implementation plan. Then build from there.

The future of AI-assisted development is component-based and workflow-driven, but it starts with understanding your current manual processes first.