Building a Tweetdeck-style interface for LiveATC feeds - Ian Servin


If you’re an aviation nerd, you know how powerful ADS-B and flight-tracking sites can be. But ADS-B has its limitations: in some situations we have to turn to other methods to track, identify, and characterize the operations of aircraft that are not broadcasting ADS-B Out. ACARS is one potential path for gaining information we can’t get from flight tracking, but another tool is simply listening to Air Traffic Control radio.

While local, state and federal police and military aircraft are permitted to disable ADS-B out, in most circumstances they still maintain communication with Air Traffic Controllers to navigate the NAS. We can use traditional radios, Software Defined Radios (SDRs) and live streamed broadcasts like on LiveATC.net to listen in and monitor an area for non-ADS-B operators.

The motivation to build this tool now came specifically from recent events in Minneapolis. With the surge of DHS, ICE, and CBP assets operating in the area, the airspace became host to several federal operators that were largely invisible on flight-tracking maps, including aircraft from the Department of Homeland Security, the Federal Bureau of Investigation, and the Drug Enforcement Administration. To understand the scope of these agencies’ operations, we needed to hear them, and ideally we needed to monitor multiple feeds at once.

But listening to ATC radio for that needle-in-a-haystack moment is inefficient. You have to monitor hours of traffic just to catch a five-second exchange where a “TROY” (DHS) or “KONA” (FBI) callsign checks in. It’s also difficult to monitor multiple audio feeds at once.

I wanted a better way to mine this audio data—a way to “doomscroll” radio traffic the way I do Bluesky. So, I built ATC Deck. The goal was simple: create a dashboard that transcribes multiple ATC audio streams simultaneously.

Instead of a single continuous audio player, ATC Deck listens to multiple feeds from LiveATC.net. It detects voice activity, chops the audio into individual clips, and runs them through a local instance of OpenAI Whisper that uses prompt seeding to improve accuracy.

The result is a columnar, real-time feed of every transmission, transcribed and timestamped. The machine-generated transcripts are imperfect and vary widely in quality depending on the incoming audio, but they are good enough to provide situational awareness in many circumstances.

Screenshot of ATC Deck showing a user monitoring two Minneapolis Approach feeds. Several transmissions involving a known DHS callsign “TROY” are highlighted.

How It Works Under the Hood

The system is architected as a FastAPI (Python) backend serving a vanilla JavaScript frontend. While the architecture is simple, the challenge lies in the real-time pipeline required to turn a continuous radio stream into discrete, readable messages.

The “Browser Handshake” (Bypassing WAF)

LiveATC protects its streams with Cloudflare’s anti-bot measures, which prevent standard HTTP clients (like curl or Python’s requests) from accessing the stream data. To bypass this, I implemented a custom scraper using cloudscraper, which acts as a client-side agent that mimics a legitimate browser’s TLS fingerprint and solves the JavaScript challenges Cloudflare presents.

Instead of just grabbing a URL, the system effectively “browses” the search results page, parses the HTML to find the dynamic stream_id and PLS (playlist) URLs, and then resolves those playlists to the underlying raw MP3 stream servers.
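The resolution flow can be sketched roughly like this. The URL pattern, playlist format handling, and function names here are assumptions for illustration, not the tool’s actual scraper code:

```python
import re


def parse_pls(pls_text: str) -> str:
    """Extract the first stream server URL from a PLS playlist body."""
    match = re.search(r"File1=(\S+)", pls_text)
    if not match:
        raise ValueError("no File1 entry found in playlist")
    return match.group(1)


def resolve_stream_url(feed_id: str) -> str:
    """Fetch a LiveATC PLS playlist past Cloudflare, then resolve the raw MP3 server."""
    import cloudscraper  # third-party: pip install cloudscraper

    scraper = cloudscraper.create_scraper()  # presents a browser-like TLS fingerprint
    pls = scraper.get(f"https://www.liveatc.net/play/{feed_id}.pls").text
    return parse_pls(pls)
```

Splitting the playlist parsing from the fetch keeps the Cloudflare-dependent part isolated, so the rest of the pipeline only ever sees a plain MP3 URL.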

The Audio Pipeline (Ingest & VAD)
Once the backend acquires a valid stream URL, it spawns a worker thread that pipes the audio through FFmpeg to convert it into raw PCM data (16kHz, 16-bit mono) in real-time.
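A minimal version of that ingest step, assuming the `ffmpeg` binary is on the PATH (the flags shown are the standard ones for 16 kHz, 16-bit mono PCM output; the helper names are mine, not the tool’s):

```python
import subprocess

SAMPLE_RATE = 16000   # Hz, what Whisper expects
BYTES_PER_SAMPLE = 2  # 16-bit samples
CHUNK_MS = 50         # VAD analysis window
CHUNK_BYTES = SAMPLE_RATE * BYTES_PER_SAMPLE * CHUNK_MS // 1000  # 1600 bytes per 50 ms


def build_ffmpeg_cmd(stream_url: str) -> list[str]:
    """FFmpeg invocation that decodes an MP3 stream to raw PCM on stdout."""
    return [
        "ffmpeg", "-loglevel", "quiet",
        "-i", stream_url,         # input: the resolved MP3 stream
        "-f", "s16le",            # signed 16-bit little-endian PCM
        "-ar", str(SAMPLE_RATE),  # resample to 16 kHz
        "-ac", "1",               # downmix to mono
        "pipe:1",                 # write raw audio to stdout
    ]


def open_pcm_stream(stream_url: str) -> subprocess.Popen:
    """Spawn FFmpeg; a worker thread then reads CHUNK_BYTES at a time from stdout."""
    return subprocess.Popen(build_ffmpeg_cmd(stream_url), stdout=subprocess.PIPE)
```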

This raw audio is fed into a custom Voice Activity Detector (VAD) loop. Since ATC audio is mostly silence with sudden bursts of speech, sending the entire stream to an AI model would be inefficient and also lead to hallucinations and artifacts that could ruin the entire session.

  • Energy Thresholding: The system calculates the RMS (Root Mean Square) amplitude of 50ms audio chunks to detect when the noise floor rises above a specific threshold (currently set to ignore background static).
  • Ring Buffer: To ensure we don’t cut off the first syllable of a transmission, the system maintains a 1-second “pre-roll” ring buffer. When speech is detected, this buffer is flushed to the recording first.

Context-Aware Transcription
Isolated audio clips are passed to a local instance of OpenAI’s Whisper (specifically the medium.en model).

A critical component here is Prompt Seeding. Whisper is a general-purpose model, so out of the box, it often misinterprets aviation jargon (hearing “niner” as “nine” or “squawk” as “squat”). I inject a specialized AVIATION_PROMPT—a context string containing the NATO phonetic alphabet, common airline callsigns (United, Delta, Brickyard), and ATC phraseology—into the model context. This “primes” the neural network to expect aviation terms, significantly increasing accuracy.
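The seeding itself uses Whisper’s `initial_prompt` parameter. The prompt below is an illustrative, condensed stand-in for the real AVIATION_PROMPT, and in practice the model would be loaded once at startup rather than per clip:

```python
# Illustrative, condensed version of the context prompt
AVIATION_PROMPT = (
    "Air traffic control transmission. Alpha Bravo Charlie Delta Echo Foxtrot "
    "Golf Hotel India Juliett Kilo Lima Mike November Oscar Papa Quebec Romeo "
    "Sierra Tango Uniform Victor Whiskey X-ray Yankee Zulu. Niner, squawk, "
    "heavy, ident, cleared for takeoff. United, Delta, Brickyard."
)


def transcribe_clip(path: str, model_name: str = "medium.en") -> str:
    """Transcribe one audio clip with a locally loaded Whisper model."""
    import whisper  # third-party: pip install openai-whisper

    model = whisper.load_model(model_name)
    result = model.transcribe(
        path,
        initial_prompt=AVIATION_PROMPT,  # primes the decoder toward ATC phraseology
        fp16=False,
    )
    return result["text"].strip()
```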

The Real-Time Waterfall
Finally, when a transcript is ready, the backend broadcasts a JSON payload containing the text, metadata, and a link to the extracted audio clip over a WebSocket connection. The frontend receives this event and dynamically inserts the new block into the DOM, creating the scrolling “waterfall” UI.

Keyword Alerting and Recording

The “TweetDeck” layout lets me monitor multiple frequencies or feeds at once, but keyword alerting is the key feature for scaling up detection of operations of interest. I built the keyword alerting system with three tiers:

  • Critical (Red): For specific callsigns I’m hunting (e.g., “TROY”, “OMAHA”) or emergency phrases like “Mayday.”
  • High (Yellow): For operational context, like “Helicopter,” “Orbit,” or “Loiter.”
  • Informational (Blue): Common, innocuous language, largely phraseology around VFR traffic, that is not significant on its own but can add helpful context in certain situations.

Because every transmission is saved as a discrete clip, the tool effectively works like a DVR. If a police helicopter spends 20 minutes coordinating with a ground unit, I can multi-select all those clips and download them as a single, concatenated MP3 “mixtape” of the event or I can download individual transmissions as desired.
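Because every clip shares the same encoding parameters, the concatenation can be done losslessly with FFmpeg’s concat demuxer. This sketch assumes `ffmpeg` is on the PATH; the helper names are illustrative:

```python
import os
import subprocess
import tempfile


def build_concat_list(clip_paths: list[str]) -> str:
    """Build the file-list body FFmpeg's concat demuxer expects."""
    return "".join(f"file '{os.path.abspath(p)}'\n" for p in clip_paths)


def concat_clips(clip_paths: list[str], out_path: str) -> None:
    """Join selected clips into one MP3 'mixtape' without re-encoding."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(build_concat_list(clip_paths))
        list_path = f.name
    try:
        subprocess.run(
            ["ffmpeg", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", out_path],  # -c copy: no re-encode
            check=True,
        )
    finally:
        os.unlink(list_path)
```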

Why this isn’t public (yet)

As much as I’d love to open-source this tool, there is a catch: LiveATC really doesn’t like tools like this.

The platform is designed for human listeners via their app or website, not for automated scrapers pulling down multiple simultaneous streams 24/7. While the “frontend agent” workaround functions for a single user, releasing this tool publicly would likely lead to aggressive patching on their end or IP bans for users. This is an arms race I do not wish to be part of.

For now, ATC Deck remains a personal research tool and a proof of concept. If you are a researcher interested in learning more, please reach out via Bluesky or Signal.