Show HN: I vibe-coded an aircraft AR tracking app and wasted weeks on AI bugs
Built an app entirely with Claude/AI assistance - backend (Django + C#), iOS frontend, server deployment, CI/CD pipeline, the works. Hosted on a single VPS: Postgres, Redis, and Django all on the same box. The VPS is a VM on a Proxmox server I have sitting in a datacenter (Dell R630, 1x Xeon 2697v4, 128GB memory, 6x 960GB Intel D3-S4610 with an Optane SLOG, etc.). No AWS/GCP/Vercel. Incremental cost to me: $0/month. I skipped Cloudflare Tunnels for this - hoping I don't regret that.
What it does: Point your phone at the sky, see real-time aircraft info overlaid on your camera. ADS-B data from community feeders, WebSocket streaming, kinematic prediction to smooth positions between updates. No ARKit – just AVFoundation camera + CoreLocation/CoreMotion + math. SwiftUI overlays positioned via GPS/heading projection.
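Roughly what the projection boils down to, for anyone curious - a simplified sketch, not the app's actual code (the type name and the basic pinhole/FOV model here are just illustrative; the real thing also has to deal with roll, lens distortion, and the zoom-dependent FOV mentioned below):

    import CoreGraphics
    import Foundation

    // Simplified sketch (not the app's actual code) of the GPS/heading
    // projection: map an aircraft's bearing and elevation, relative to
    // where the phone is pointing, onto a screen point using a basic
    // pinhole model with the camera's horizontal/vertical FOV.
    struct SkyProjector {
        let headingDeg: Double   // device compass heading (degrees true)
        let pitchDeg: Double     // camera pitch above the horizon
        let hFOVDeg: Double      // horizontal field of view
        let vFOVDeg: Double      // vertical field of view
        let screenSize: CGSize

        // bearingDeg/elevationDeg: direction to the aircraft from the user,
        // computed from the two GPS positions and altitudes.
        func screenPoint(bearingDeg: Double, elevationDeg: Double) -> CGPoint? {
            // Signed angular offsets from the center of the camera's view.
            var dAz = bearingDeg - headingDeg
            if dAz > 180 { dAz -= 360 }          // wrap to [-180, 180]
            if dAz < -180 { dAz += 360 }
            let dEl = elevationDeg - pitchDeg

            // Outside the field of view: no overlay.
            guard abs(dAz) < hFOVDeg / 2, abs(dEl) < vFOVDeg / 2 else { return nil }

            // Angular offset -> pixels (tan-based; ignores roll and distortion).
            let rad = Double.pi / 180
            let x = screenSize.width / 2 * (1 + CGFloat(tan(dAz * rad) / tan(hFOVDeg / 2 * rad)))
            let y = screenSize.height / 2 * (1 - CGFloat(tan(dEl * rad) / tan(vFOVDeg / 2 * rad)))
            return CGPoint(x: x, y: y)
        }
    }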
The humbling part: I spent 2 months debugging "FOV calibration issues." Built an 800-line calibration UI, a Flask debug server, Jupyter notebooks for pipeline analysis, extensive logging infrastructure. Hung a literal picture on the wall with a black rectangle of a specific size to "calibrate" the FOV reported by my phone. The AI helped me build all of it beautifully.
The actual bug? A UI scale factor on the overlay boxes. Not FOV math. Not coordinate projection. Just a scaleEffect() making things the wrong size. Commit message when I found it: "scale may have been fucking me over for a long time for 'fov issues'". Guess where that scaleEffect() was introduced? That's right - AI-generated. At one point I had asked it something along the lines of "ok, when you draw the boxes around the aircraft, make them smaller when the aircraft is farther away".
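For the SwiftUI folks, here's a simplified sketch of the difference (not the actual diff - names and numbers are made up). The request was "smaller box when farther away"; sizing the frame with a distance factor keeps the box consistent with the projection, while a scaleEffect() layered on top quietly changes what ends up on screen:

    import SwiftUI

    // Simplified sketch (not the actual app code) of the "make boxes
    // smaller when the aircraft is farther away" request.
    struct AircraftBox: View {
        let screenPoint: CGPoint   // center point from the FOV projection
        let distanceKm: Double

        // Shrink with distance, clamped so nearby boxes don't blow up
        // and distant ones stay visible. Numbers here are made up.
        private var sizeFactor: CGFloat {
            CGFloat(max(0.4, min(1.0, 10.0 / distanceKm)))
        }

        var body: some View {
            Rectangle()
                .stroke(Color.green, lineWidth: 2)
                // Sizing the frame directly keeps the box centered on the
                // projected point, and its size is easy to reason about.
                .frame(width: 80 * sizeFactor, height: 50 * sizeFactor)
                .position(screenPoint)
            // The AI's version (roughly) layered a .scaleEffect(sizeFactor)
            // on top instead. scaleEffect scales whatever it's attached to
            // about its anchor, so the on-screen box size (and, if applied
            // to a container of positioned overlays, the positions too)
            // stops matching the projection math - which reads exactly
            // like a bad FOV value.
        }
    }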
I went through 2-3 major model releases, testing each one with something like: "hey, I've been fighting a FOV bug for a while - can you please take a look and let me know if any issues jump out?" Gemini 3 Pro, Opus 4.5: none of them found the "bug".
Takeaways from vibe-coding a full product:
- AI is incredible at building things fast - entire features in minutes. The entire UI, website, logo, etc., all AI. Claude Opus 4.5 kind of sucks at UI; Gemini 3 cleaned all that up.
- AI will also confidently help you debug the wrong thing for weeks
- Still need to know when to step back and question your assumptions
- Deleted 2,700 lines of debug infrastructure once I found the real bug
- Low performance? Just tell the AI to rewrite it in a more performant language. (Load tested the process with 1,000 connections: with Python/Django, tons of drops and latency spikes up to 5,000ms. Switched to C# and it now handles 1,000 connections with latency under 300ms.)
Release process: painless, except for the test RevenueCat SDK key causing an instacrash - I hadn't tested the release build locally. Approved in 6 minutes on the 2nd submission.
Question: what are people using to get a highly accurate heading out of Apple devices? The estimated heading error never drops below 10°, and the projections are about 50/50 between spot on and not that close.
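For reference, the "estimated error" I mean is what CoreLocation hands back. A minimal sketch of reading it, using the standard CLLocationManager heading APIs:

    import CoreLocation

    // Sketch: read the compass heading plus its estimated error
    // via the standard CLLocationManager delegate callbacks.
    final class HeadingSource: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
            manager.headingFilter = 1   // degrees of change before a new callback
            manager.startUpdatingHeading()
        }

        func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
            // trueHeading needs location services; headingAccuracy is the
            // estimated error in degrees (negative means invalid).
            guard newHeading.headingAccuracy >= 0 else { return }
            print("heading \(newHeading.trueHeading)°, ±\(newHeading.headingAccuracy)°")
            // In practice that accuracy rarely drops below ~10° for me.
        }
    }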
App link: https://apps.apple.com/us/app/skyspottr/id6756084687

Love the writeup and I feel your pain - I'm a big fan of using LLMs for coding (as you'll see from my history).

> Still need to know when to step back and question your assumptions

Did you ever let the AI question your assumptions? I've found myself in a rut before, and giving it the issue with as little of my own personal context as possible has helped surface what I needed. I'm curious how you found the bug in the first place - was it during a vibe-code session, or did you have a lightbulb moment? Cool app btw.

Thanks! How I eventually found it was stripping things back layer by layer. By that I mean I started with the raw camera feed and got things working well in a separate Swift view, then peeled features off the main process one by one. And then bam: aircraft were exactly where they should be (minus the compass inaccuracy). I even did things like drawing mountain peaks (I live near Denver) as "aircraft" to figure things out, and determining the different FOV at different zoom levels (a lot of the AI back-and-forth keyed in on how the boxes would drift in one direction at low zoom, be completely correct at some middle zoom, and then drift the opposite direction at high zoom). That peeling back was me reading each function to see what it did (I am a dev, but not a SwiftUI dev). So yep, can't vibe code it all!
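Re: the zoom/FOV bit, for anyone attempting the same thing - a rough sketch of estimating the effective horizontal FOV from what AVFoundation reports. Real devices deviate from this ideal crop model (lens switching, stabilization cropping), which is part of why calibration still mattered:

    import AVFoundation

    // Rough sketch: estimate the effective horizontal FOV at the current
    // zoom factor from the camera format's reported base FOV.
    func effectiveHorizontalFOV(device: AVCaptureDevice) -> Double {
        let baseFOVDeg = Double(device.activeFormat.videoFieldOfView) // at zoom factor 1.0
        let zoom = Double(device.videoZoomFactor)
        // Digital zoom crops the image, so the half-angle shrinks as
        // atan(tan(baseHalfAngle) / zoom).
        let halfAngleRad = atan(tan(baseFOVDeg / 2 * .pi / 180) / zoom)
        return 2 * halfAngleRad * 180 / .pi
    }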