April 9th, 2025
Visual Studio Code has just released GitHub Copilot Agent Mode to all users this week. It’s an AI assistant tool that can read your entire codebase, making prompt-based programming much more accessible. I’ve tried a lot of different approaches to working with the agent, so I wanted to show you which approach works best for me at the moment.
Disclaimer: This post should only be seen as a simple show-and-tell. I’m not trying to convince anyone to switch to prompt-assisted coding. I’m still figuring out when to use prompt-assisted coding and when to code it myself. Sometimes it’s amazing what the LLM comes up with, and sometimes it’s just a waste of time.
Prerequisites:
- Activate Agent Mode as specified in the official VS Code docs
- I’m using the paid GitHub Copilot plan, so please check if this approach also works with the free version.
Step 1: Write Down Your Idea
Open your workspace in VS Code. This can be either a new project or an existing one. Save your idea as an idea.md file. The bigger the idea (e.g. “Rewrite everything in Rust”), the less likely it is that the result will work. On the other hand, if you are starting from scratch, or are early in the project, the agent is quite solid at creating lots of boilerplate code. If you have anything associated with the idea (requirements, specifications, bug reports, log traces), copy it all into the file. The more specific you are, the less you’ll have to tweak later.
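As a purely hypothetical example (the project, paths, and feature are made up), an idea.md for a small feature might look like this:

```markdown
# Idea: Export reports as CSV

Users should be able to download any report as a CSV file.

## Context
- The reports live in `src/reports/` (Flask backend).
- Bug report: users currently copy-paste tables into Excel by hand.

## Constraints
- No new dependencies if possible.
- Must handle reports with ~100k rows.
```

Even a short file like this gives the agent far more to work with than a one-line chat message.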
Step 2: Create Specification
Let the prompting begin! The first step is to create a detailed specification of your idea and save it in specification.md. Use agent mode and the best model available to you. I’m currently using Claude 3.7 Sonnet, but there are other strong models available. GPT-4o doesn’t work very well at this stage, in my experience.
When to use Edit mode: If you know exactly which files or folders you want to change, you can just add them (drag-and-drop works!) and use edit mode instead. This gives you more control over the context. Agent mode usually tries to use your entire codebase.
**Objective**
Create a step-by-step specification for the given idea in idea.md that can be handed off to developers.
The specification should be saved in specification.md and include the following sections:
- Overview: A concise summary of the idea or feature.
- Goals: The main objectives.
- Requirements: Detailed functional and non-functional requirements.
- Assumptions: All assumptions made while drafting the spec (to be validated or refined later).
- Open Questions: Points needing further clarification or decisions.
- Step-by-Step Plan: A high-level roadmap of implementation steps.
**Instructions**
Incorporate any background or context specific to the project. Append or refine assumptions as needed.
Keep the structure modular, so any section can be easily updated by subsequent prompts.
Once the spec is generated, allow for iterative refinements in each section.
**Request**
Generate an initial draft of specification.md based on the above structure.
Make sure each requirement or assumption is clearly stated.
Ensure the plan is broken down into logical, actionable steps.
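For orientation, a specification.md generated from this prompt typically starts with a skeleton along these lines (the contents will of course vary by project):

```markdown
# Specification

## Overview
...

## Goals
...

## Requirements
### Functional
...
### Non-functional
...

## Assumptions
...

## Open Questions
...

## Step-by-Step Plan
1. ...
```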
Step 3: Open Questions
Usually the agent has a lot of open questions that you didn’t specify in your idea. These are often the same questions you’ll have to answer when you implement it yourself, so it’s usually quite enlightening. The next prompt will go through the questions one by one. You can use Agent mode or Edit mode to do this. Edit mode, with only the specification.md file attached as context, should generally be sufficient and the fastest. You can also use GPT-4o for this step to save some resources.
Let's systematically review each Open Question from the spec one by one.

**Open Question**
• Restate the first unresolved question in your own words.
• Offer potential answers.
• Ask if we should confirm it, provide more details, or revise it.
*(Stop here and wait for user input)*
After receiving your input:
1. Update the appropriate section in the specification (Requirements, Assumptions, etc.) with the resolved information
2. Mark the question as resolved in the Open Questions section
3. Immediately continue to the next unresolved question, following the same process
Continue this cycle until all Open Questions are resolved.
**Example workflow:**
1. Present Question 1 and wait for input
2. After receiving input:
- Add details to Requirements section
- Mark Question 1 as resolved
3. Present Question 2 and wait for input
4. And so on...
In the end all open questions should be resolved.
Step 4: Verify Specification
This is a good time to use some reasoning capabilities. I like to use OpenAI o1 or Claude 3.7 Sonnet Thinking to check the specification with a more advanced model. Select the Ask mode to use reasoning models. It’s great for checking that everything seems right, but it tends to add further, less relevant questions, which I usually discard. This is also a good time to proofread the specification yourself.
Review the updated specification.md and compare it with idea.md to confirm they remain aligned.
Identify any unanswered questions or clarifications needed before we begin implementation,
and list them as an "Open Questions" snippet so they can be added to the file if necessary.

If this leads to some important open questions, go back to “Step 3: Open Questions”.
Step 5: Implementation Time
This step relies heavily on the specifications created earlier. The better the implementation steps are defined, the better the execution will work.
# Implementation with Test-Driven Development

Refer to the "Implementation Steps" in `specification.md`.
## Process:
1. Identify the next incomplete step in `specification.md`. If none are marked as completed, start at step 1.
2. For each implementation step:
- First, write tests for the functionality where applicable (following test-driven development principles)
- Implement the required code or updates to make the tests pass
- Run the tests to verify the implementation works as expected
- Include test results in your response
3. When a step is successfully implemented and verified with tests:
- Mark the step as completed in `specification.md`
- Report the test results and any observations
4. If all steps are completed, respond that no more steps remain.
After each implementation, run the tests and report results to confirm functionality works as expected.
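To make the TDD loop concrete, here is a minimal sketch of what one such cycle might produce. The `slugify` helper is an entirely hypothetical example, not part of the workflow above:

```python
import re

# Red: the test is written first, before the implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Rust & Go  ") == "rust-go"

# Green: the implementation is added until the test passes.
def slugify(text: str) -> str:
    """Lowercase, collapse non-alphanumeric runs into hyphens, trim hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()
```

The agent would then report the test results and mark the corresponding step as completed in specification.md.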
This is the prompt I update the most. It’s difficult to find a balance between making progress and checking that everything is working. This usually means that both the agent and you are working at the same time. In the beginning, I used to let it do the git commit as well, but that slowed down the process and I often had to change the commits again. So what I do now is let it cook, commit when it’s ready (so the agent can update the same files in the next steps), and then review the commit while it cooks the next stuff.
Further Improvements
- Let the agent create your copilot instructions for you. It most often doesn’t understand what you want if you don’t give it any context, so you can make use of the #fetch tool. This is often useful if you want to use something the LLM doesn’t know because of its training data cutoff. I usually do it the quick and dirty way, by just going into Agent mode and prompting: Create the github instructions as specified in #fetch https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot
- Customise your instructions for Copilot in VS Code. For example, you can specify custom test instructions that improve the output of your generated tests.
- Create user prompts to easily reuse them in different workspaces. The prompts shown here are all stored in my User Prompts for easy access.
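A .github/copilot-instructions.md produced this way might look like the following; the project details here are hypothetical, since the file is always specific to your codebase:

```markdown
# Copilot Instructions

- This is a Python 3.12 / Flask project; prefer the standard library over new dependencies.
- Write tests with pytest and place them in `tests/`, mirroring the source layout.
- Use type hints on all public functions.
```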
Final Thoughts
As always, this is an ongoing learning process. I’m sure the workflow described here will soon be outdated and will evolve quickly as new models and tools are introduced. Special thanks to the Hacker News community for their valuable insights and discussions.