Most engineers know the familiar rhythm of a PR review: glance at the diff, hunt for logic mistakes, debate naming, rinse, repeat. It’s essential work, but it’s also incredibly easy to miss the small but critical bugs — off-by-one errors, edge case logic, subtle null handling problems — especially when PRs are large or rushed.
For teams like ours running a tight CI/CD pipeline (we use GitHub Actions), efficiency is everything. Every minute spent waiting for a human to spot a simple fix is a minute the code isn’t deployed and delivering value.
Since we already use Cursor, we naturally started using Cursor's Bugbot more and more as a first-pass reviewer on our pull requests. Initially it was more out of curiosity, as we had tried CodeRabbit in the past and did not like it. Bugbot does not replace human reviewers, but it changes when and where we catch issues.
At a high level, Bugbot:
- Runs automatically on new PRs and on updates, analyzing just the diff with context about intent and code relationships. It can also be triggered manually with commands like `@cursor review` or `bugbot run`.
- Comments directly on GitHub where it finds potential problems, complete with suggested fix ideas.
- Supports custom rules via `.cursor/BUGBOT.md` so you can encode team-specific guardrails or conventions.
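To give a sense of what that looks like, here is a minimal, hypothetical sketch of a `.cursor/BUGBOT.md` file. The rules are written as plain Markdown prose; the section names and rules below are illustrative examples of the kind of guardrails a team might encode, not our actual file:
```markdown
# Bugbot rules

<!-- Illustrative sketch only: section names and rules are examples, not a prescribed format. -->

## General
- Flag `console.log` statements left in frontend production code.
- Flag HTTP calls that do not handle error responses.

## Frontend
- New components should come with tests written with Angular Testing Library.
```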
We also use custom Cursor commands in our workflow. Here is a shortened version of our `/add-linear-issue` command:
# /add-linear-issue

## Overview
The `/add-linear-issue` command creates a new issue in Linear, our project management tool, and adds it to an existing project.
## Linear MCP Server
- Use the Linear MCP server to interact with Linear tickets
- Always use the Sunhat team ID: XYZ
- Fetch Linear issue details using the Linear MCP server
- Update Linear issue status using the Linear MCP server
- Assign Linear issues using the Linear MCP server
- Get the list of available projects, labels, and users we have in Linear using the Linear MCP server
- An issue is either a story, a bug, or a security issue.
## Steps
1. **Identify the issue**
- Determine the issue to add to Linear
- Determine the project to add the issue to -> check our open projects in Linear and pick the most relevant one. Never create a new project.
- Determine the possible assignee for the issue -> check our users in Linear and use partial matching to find the most relevant user. If no user is explicitly assigned, do not assign anyone.
- Determine the priority for the issue: use our legend of priorities.
- Determine the status for the issue: use our legend of statuses.
- Determine the labels for the issue -> use our default labels defined in Linear. Use at least one and at most three labels. Do not create new labels.
- Determine the estimated complexity for the issue -> use our legend of estimated sizes. Complexity depends on the amount of changes (files, lines of code, etc.) and uncertainty of work required (e.g. dependencies to backend) to complete the issue.
- Determine the description for the issue -> always use our default issue templates we have defined in Linear. Try to find the relevant code and mention it (file name and line number) in the description of the issue with a disclaimer (Markdown quote) that it may be outdated or wrong. Keep the description concise and to the point.
2. **Check if the issue already exists** in Linear. If it does, ask me whether I want to update it with the new information. If I say yes, update it; if I say no, leave it as is.
3. **Create the issue** in Linear using the information from step 1. If an existing issue was found in step 2, do not create a duplicate.
4. **Print the link to the created issue in Linear** so that I can click on it.
5. **Notify the reporter**: If there is an assignee and the reporter is different from the assigned engineer, create an FYI comment on the created issue to inform the reporter.
## Example Workflow
1. Issue to add: `Add new validation rules for indicator values`
2. Project to add the issue to: `Customer Feedback`
3. Possible assignee for the issue: `John Doe`
4. Priority for the issue: `Medium`
5. Status for the issue: `Todo`
6. Labels for the issue: `Frontend, Backend`
7. Description for the issue: `Add the following validation rules for indicator values: greater than 100, less than 0, not a number`
## Legend
- Status: `Backlog`, `Todo`, `In Progress`, `In Review`, `Done`
- `Backlog`: Issues that are not yet started. When in doubt, use this.
- `Todo`: Issues that are planned to be done. When the issue has at least High priority, use this.
- `In Progress`: Issues that are currently being worked on. Use this when the issue text indicates work has already started.
- `In Review`: Issues that are being reviewed by the team.
- `Done`: Issues that are already completed.
- Priority: `None`, `Low`, `Medium`, `High`, `Urgent`
- `None`: Issues that are not yet prioritized. When in doubt, use this.
- `Low`: Issues that are low priority, e.g. small visual glitches.
- `Medium`: Issues that are medium priority, e.g. a new feature that is not critical but still important.
- `High`: Issues that are high priority, e.g. a new feature that is critical and important for at least one customer.
- `Urgent`: Issues that are urgent, e.g. security issues or downtime issues.
- Estimate (t-shirt size): `XS`, `S`, `M`, `L`, `XL`
- `XS`: Very small issues that can be done in a few minutes, e.g. documentation changes or changing one button label.
- `S`: Small issues, e.g. adding a small convenience feature or a small bug fix.
- `M`: Medium issues, e.g. a change that requires only frontend changes.
- `L`: Large issues, e.g. a change that requires both frontend and backend changes, or a new feature that requires a lot of refactoring.
- `XL`: Very large issues, e.g. a new feature that requires both frontend and backend changes plus a lot of refactoring and testing.
## Additional Context
You may use the following context for the description of the issue:
- Our backend has Swagger documentation: link
- Our frontend: link
## Description Templates
- **Story template:**
```markdown
**🤔 Why and for who are we doing this?**
For:
Why:
**🏁 What do we need to do?**
**🚸 What is the worst that can happen (e.g. security, UX, performance)?**
**ℹ️ Any more info? (optional):**
```
- **Bug template:**
```markdown
**🚶‍♂️ Steps to reproduce:**
**💭 Expected behavior:**
**🐞 Actual behavior:**
**ℹ️ More Info / Screenshots / Video (optional):**
```
- **Security issue template:**
```markdown
**🤔 What is the vulnerability?**
**🏁 What do we need to do?**
Fix the vulnerability:
**ℹ️ Any more info? (optional):**
```
Benefits of using Bugbot in our development process
Bugbot sits right at the start of our review process, essentially becoming the fastest, least-tiring ‘first reviewer.’
- Before CI: Bugbot scans the code on the PR.
- During CI: Our existing tests (like those run with Angular Testing Library and Testcontainers) catch functional regressions.
- After CI: Human reviewers focus on architectural decisions and complex logic.
We’ve noticed a few concrete benefits from incorporating Bugbot into our flow:
- Fewer “oh my bad” bugs late in review. When Bugbot flags a logic issue early, the human reviewer can focus on architecture and design trade-offs, not basic correctness.
- Better alignment on standards. Having auto-comments up front nudges authors toward patterns we care about before the first human comment lands.
- Faster iteration. If a suggestion can be fixed quickly (even automated via Cursor links), the turnaround on a clean, merge-ready PR improves.
And if something specific keeps cropping up in your project (think domain-specific invariants or architectural patterns), we can add it as a Bugbot rule and let the bot enforce it consistently. That means fewer repetitive comments from reviewers and more time spent on what actually matters. A recent example: a rule that leaves a comment on the PR reminding the engineer to bump the manifest version of our browser extension whenever the change should trigger a new browser extension release.
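Phrased as a Bugbot rule, that reminder might look something like this (the wording is illustrative, not our exact rule):
```markdown
## Browser extension
<!-- Illustrative wording -->
- If a change to the browser extension should trigger a new release, remind the
  author in a PR comment to bump the version in the extension manifest.
```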
As with any AI assistant, there are edge cases. Bugbot doesn’t always catch every problem, and it may not understand deep business logic the way a domain expert reviewer does. It does not eliminate the need for a code review performed by an engineer. But using it as an early filter — catching obvious bugs and surfacing fix proposals — has already saved us plenty of time.
Conclusion
If your team is serious about minimizing noise and catching bugs earlier in the cycle, giving Bugbot rules a real place in your development workflow is worth exploring. Let me know about your experiences.