My GitHub Workflow Fixes Ten Random Files Every Night. It Keeps Finding Real Bugs.


It picks a file, gathers the code around it, asks OpenAI for a patch, opens a PR, and then another AI reviews the result.

Heikki Hellgren


Photo by Juanjo Jaramillo (https://unsplash.com/@juanjodev02) on Unsplash (https://unsplash.com)

Every night one GitHub workflow in our repository picks ten random source files, collects context from the files they use (including tests and nearby imports), asks the OpenAI API if anything looks wrong, and opens a pull request when it finds a fix worth making.
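The article does not show the workflow's code, but the steps above — pick random files, gather nearby context, build a prompt for the model — can be sketched roughly. This is a minimal illustration, not the author's actual implementation; the function names, the extension list, and the test-file naming convention are all assumptions for the sketch.

```python
import random
from pathlib import Path

# Assumption: which extensions count as "source files" in this repo.
SOURCE_EXTS = {".py", ".ts", ".tsx", ".c", ".cpp"}

def pick_random_files(repo_root, n=10, seed=None):
    """Pick n random source files, as the nightly job's first step does."""
    rng = random.Random(seed)
    files = sorted(
        p for p in Path(repo_root).rglob("*")
        if p.is_file() and p.suffix in SOURCE_EXTS
    )
    return rng.sample(files, min(n, len(files)))

def gather_context(path):
    """Collect the file itself plus a sibling test file, if one exists
    (assumed naming convention: test_<name>)."""
    parts = [f"### {path.name}\n{path.read_text()}"]
    test = path.with_name(f"test_{path.name}")
    if test.exists():
        parts.append(f"### {test.name}\n{test.read_text()}")
    return "\n\n".join(parts)

def build_prompt(context):
    """Frame the context as a bug-hunting request before sending it
    to the OpenAI API (the API call itself is omitted here)."""
    return (
        "You are reviewing existing code for latent bugs. "
        "If you find a real problem, propose a minimal patch. "
        "If nothing is wrong, say so.\n\n" + context
    )
```

From there, the real workflow would send the prompt to the model, apply any returned patch on a branch, and open a pull request — steps that depend on repository credentials and are left out of this sketch.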

I built it expecting mostly noise. It found real problems fast enough that I stopped calling it an experiment.

We now have AI generating small fixes and a second AI workflow reviewing those pull requests before a human merges them. That sounds slightly ridiculous when written out. It is also one of the more useful maintenance automations I have added in a while.

Why I Built This At All

This started from a very normal frustration. We already had human code review, static analysis, tests, and a GitHub workflow that runs an AI review pass on pull requests. That helped on new changes. It did nothing for the quiet corners of the repo that nobody touched for weeks or months.