An interactive look at how a video signal is made
blog.pizzabox.computer

Cool thing about analog video was the ability to cheaply store a lot of data using a common VHS deck. The Japanese started recording digital PCM audio on video tapes in 1977, long before CDs. This is why CD audio has a 44.1 kHz sampling rate - a common divisor for fitting 3 samples per line of video at both 50 and 60 Hz field rates.
https://www.youtube.com/watch?v=DLS9sQUxVlI
https://youtu.be/-ZJmQQ9OCtI?t=7240
In 1992 an enterprising Russian company released a line of ISA controllers turning a VHS deck into a streamer: https://en.wikipedia.org/wiki/ArVid - up to 4 GB per tape, at a time when a 4 GB hard drive cost over $1000.
> This is why CD audio has 44.1 sampling rate - common divisor for fitting 3 samples per line of video in both 50 and 60 field rates.
Indeed. Naively multiplying the line frequency (15734 Hz in NTSC, 15625 Hz in PAL) by 3 doesn't quite fit, but if some room is made for a really valid video signal - which implies 50 or 60 vertical blanking gaps per second - plus some logic to play samples back at the correct points in time, then it fits.
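For the curious, the arithmetic works out neatly. A sketch using the commonly cited active-line counts for PCM video adapters (the 245/294 figures are my numbers, not from the comment above):

```python
# Sketch of the 44.1 kHz derivation: 3 samples per usable video line,
# with lines in the vertical blanking interval skipped.
# Active-line counts per field are the commonly cited PCM-adapter figures.

ntsc_rate = 60 * 245 * 3   # 60 fields/s, 245 usable lines per field
pal_rate = 50 * 294 * 3    # 50 fields/s, 294 usable lines per field

print(ntsc_rate)  # 44100
print(pal_rate)   # 44100
```

Both field rates land on exactly 44100 samples per second, which is why one sampling rate could serve PCM adapters in both TV regions.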
Crazy. I’d always thought it was because of Nyquist and the 20k ish human hearing limit.
This is pretty cool.
For about two to three years now, I've leaned very heavily into analog video signals as a hobby. It culminated into me making not just a software PAL decoder, but my own fully compliant PAL "graphics card" with an FPGA and analog circuitry. PAL might be one of the most complicated analog video signals next to SECAM (though PAL is not that far from NTSC if you understand it well already), and I wanted to really understand every step in it, including all of the math.
I of course test everything with the once ubiquitous[1] PM5544 test pattern, which is described in considerable detail here, in case you were ever wondering why it looks the way it does: https://web.archive.org/web/20190907170308/http://www.radios...
It's satisfying to really understand analog TV now. I watched a lot of TV through my life, and always had the thought that while I use it so much, I had no good idea how the TV in front of me actually worked. It's also a bit bittersweet, because while generating NTSC or PAL is still useful in existing composite video situations (the motivator for the PAL graphics card was another project where I needed composite output), there are almost no broadcasts left, and it seems rather outdated in general.
One day I might make a website that explains how NTSC and PAL work in a detailed but understandable way, but I keep putting it off indefinitely.
[1] If you are from Europe or other PAL countries. In the US, I think simple SMPTE color bars were more common, and a lot of the elements in the PM5544 test card don't matter for NTSC.
Kindred souls. About three years ago, I did the same, but with NTSC. In my case I actually built a digitization board that sampled the signal from a composite cable and dumped the raw samples over USB using the Cypress EZ-USB2 chip. I didn't want to "cheat" with a chip that did the synchronization etc. :)
I never really finished it, but I threw a little bit of it on GitHub: https://github.com/gabesk/tv_python is the prototyping version (the raw data is a picture of Lena on a Raspberry Pi composite output).
Also as an aside, if anyone ever wants an example of how to stream data from an isochronous endpoint on Windows, this took me way too long to figure out: https://github.com/gabesk/tv/blob/master/actual_implementati... (see the usb_reading_thread routine) as well as this blog post: https://gabesk.blogspot.com/2018/08/streaming-from-isochrono... (that version does real-time black and white, but I never finished real-time color, as my laptop at the time couldn't handle it with my poorly optimized decoding routines).
You all have inspired me to finish this properly and publish it. Maybe even as a top level post on HN.
Nice. I do know the feeling of "not wanting to cheat" by using existing solutions, where's the fun in that...
My PAL decoder is in Matlab, and so slow that it's mostly useless (because not realtime), but convenient for trying things out with the data, since you're already in Matlab... but the real goal was the eventually reached PAL generator anyway. That even found a practical use as well.
I think you should publish!
By the way, if you ever feel the itch to go back to it, try adding PAL support. It's NTSC with a few fun twists, nothing mind-bending though. The most "advanced" part of it is the delay line (which in practice will just be a line of memory).
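If it helps, the error cancellation the delay line buys you can be shown with a toy complex-number model (my own sketch, not code from either of our projects; the angle is the hue, the magnitude the saturation):

```python
import cmath
import math

# Toy illustration of PAL's phase-error cancellation.
# Represent chroma as a complex number: angle = hue, magnitude = saturation.
true_chroma = cmath.rect(1.0, math.radians(103))  # some hue at full saturation

phase_error = math.radians(20)  # a differential phase error in transmission

# PAL flips the V (imaginary) component on alternate lines; after the
# receiver un-flips it, the same phase error lands with the opposite sign:
line_n = true_chroma * cmath.exp(1j * phase_error)
line_n1 = (true_chroma.conjugate() * cmath.exp(1j * phase_error)).conjugate()

# The delay line averages the current line with the previous one:
recovered = (line_n + line_n1) / 2

print(round(math.degrees(cmath.phase(recovered)), 1))  # 103.0 -> hue preserved
print(abs(recovered))  # cos(20 deg) ~ 0.94 -> small saturation loss instead
```

So a phase error that would shift the hue on NTSC (hence the tint control) turns into a slight, far less visible desaturation on PAL.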
> One day I might make a website that explains how NTSC and PAL work in detailed but understandable way, but I keep putting it off indefinitely.
I've been meaning to do the same thing.
I had to learn a lot about NTSC/PAL just so I could calculate the correct aspect ratios for GameCube/Wii games in Dolphin Emulator.
It was a little annoying because none of the standards actually seemed to define which part of the signal had a 4:3 aspect ratio.
I actually implemented a small idealized emulation of the flyback transformer so I could still produce results if a game (or, more likely, homebrew) were to reconfigure the VI to non-standard timings with a non-standard number of vertical lines.
Though you probably know more about color encoding than me.
The main thing I really want to explain to people is how the 240p signal that older consoles output works when the TV is "expecting" an interlaced signal.
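For what it's worth, the core of that trick can be shown with line counts alone (my own sketch, not a claim about how you'd present it): interlace comes from each field ending on a half line, and 240p consoles simply don't do that.

```python
# Sketch: why 240p works on a TV expecting interlaced NTSC.
# Standard NTSC: 525 lines per frame, i.e. 262.5 lines per field.
# The half line makes each field start half a line offset from the previous
# one, so the two fields interleave on screen.
interlaced_field = 525 / 2
print(interlaced_field % 1)  # 0.5 -> fields are offset: true interlace

# 240p consoles send whole-line fields (262 lines), so every field lands on
# the same scanlines: a stable progressive picture at the field rate.
progressive_field = 262
print(progressive_field % 1)  # 0.0 -> fields overlap: "fake progressive"
```

The TV's sync circuits don't care either way; they just follow the pulses, which is why the hack works on essentially every CRT.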
This is a great visualization. I remember playing with generating VGA through bit-banging an Arduino - it was my course project for a class in the first year of my PhD. I sadly never wrote up a detailed explanation of it, but a picture of the setup survives as my Twitter header: https://pbs.twimg.com/profile_banners/20347234/1463010760/15...
The pixel clock (640x480 at 60fps) is supposed to be ~25 MHz, while the little Arduino Nano only runs at 16 MHz, so the implementation cheats a bit and gets the timings somewhat wrong (plus, it smears the pixels a bit, but I hid this by having relatively blocky graphics). Some TVs I tested with didn't like this signal much, but luckily the projector that I did the final demo on didn't mind the spec violation. For the actual demo I had a simplistic version of Tetris running (with sound!); all the computation happened during hblank and vblank because the main picture time was just a handwritten assembly loop shoving pixels out.
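For reference, here's the timing budget that forces the cheating, using the nominal industry-standard 640x480@60 mode (the porch values are the standard ones, not measured from my project):

```python
# Nominal 640x480@60 VGA timing budget.
h_total = 640 + 16 + 96 + 48  # active + front porch + hsync + back porch = 800
v_total = 480 + 10 + 2 + 33   # = 525 lines
pixel_clock = 25_175_000      # nominal 25.175 MHz

refresh = pixel_clock / (h_total * v_total)
print(round(refresh, 2))  # 59.94 Hz

# A 16 MHz AVR has well under one cycle per pixel, hence the timing fudges:
print(round(16_000_000 / pixel_clock, 2))  # 0.64 cycles per pixel
```

With less than two-thirds of a clock cycle per pixel, you can only emit "fat" pixels several VGA pixels wide, which is why blocky graphics hide the smearing so well.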
Ben Eater has a great video[1] about building a video card from simple gates and other basic components. It goes into some detail about how a VGA signal is generated.
There's also https://odysee.com/@JamesSharman:b/introduction-vga-from-scr... by James Sharman, which goes over the first steps of how he is adding VGA to his custom computer.
I always want to make a nice-looking infographic for Wikipedia on the TV color bars, with colors, labels, and explanations of the staircase waveform, black level, color burst, etc. (basically combining all the annotations in a textbook into a single image). Most people have only seen the color bars as an image, but the more interesting aspects can only be seen on a TV waveform monitor: if the signal is properly adjusted, you can see the "staircase" waveform align to the etched marks on the CRT (not a particularly good graph: https://www.maximintegrated.com/content/dam/images/design/te...).

Currently, the Wikipedia articles on NTSC/PAL don't have any explanation of how an analog video signal is made. Too bad that I don't know anything about image editing (I did export a waveform from the ADS simulator, waiting to be visualized indefinitely).
Also, if anyone has a high-quality photo of the Philips PM5544 video signal generator, please upload it to Wikipedia. This machine is an important artifact of popular culture, yet photos of the generator itself are uncommon on the web (many people mistakenly believe the PM5544 is just a test card, not a signal generator), and so far there's no high-quality photo under a free license. (Or leave a comment if you have the actual machine - I'd pay $1000 for one. If I ever get the machine, I'll take a photo and upload it, and write a blog post about how the circles and lines are drawn by the analog circuitry.)

Finally: manuals, manuals, manuals. I'm willing to buy any documents about the Philips PM5544 (or any notable signal generator) to get them digitized. Currently the only document on the web is an issue of Philips Electronic Measuring and Microwave Notes [0] that only briefly mentions a tiny bit of its inner workings.
[0] https://frank.pocnet.net/other/sos/Philips_PM5544_PM3400_Pub...
> Or leave a comment if you have the actual machine, I'd pay $1000 for that.
I am sure the specific significance of the PM5544 is what you are looking for, but FYI there are plenty of PM5570 and PM5640 in Germany and UK ebay seller listings.
Just thought I would mention that if that is of interest to you.
Very interesting. I'm just learning about VGA. On the oscilloscope, I assume the front and back porch is clearly visible on the left and right hand side. What's the fuzzy bit on the left, just before the picture starts?
Edit: and which bit is the horizontal retrace? The bit on the far left hand side, before the front porch?
That's the color burst. NTSC faced the problem of adding color to an existing monochrome broadcast system while a) staying compatible and b) using the same bandwidth as the previous signal.
The solution was to modulate the color difference signal (two of them, actually, through quadrature amplitude modulation) onto the existing signal, which from then on was called "luminance" (basically the "brightness information" of the picture).
The color burst not only indicates that the signal is color, it is also what the receiver locks onto to determine the phase of the color information modulated on the luminance signal. Quadrature amplitude modulation is "carrierless", so you need the color burst as a reference.
In NTSC, if you watch the signal live on an oscilloscope, the color burst usually appears as a static shape. In PAL, it's a blurry mess, because it purposely not only flips 180° with every line, it also gets shifted a bit. That's the result of schemes PAL added on top of NTSC to make the picture more robust. Since the color signal (mostly[1]) occupies the same bandwidth as the luminance signal, there is noticeable crosstalk between the two.
It also works against phase errors: People who grew up in the US will know the "tint control" on the TV used to correct the hue of the picture. In PAL-land, that was not a thing anymore.
[1] The color signal tries to occupy the gaps in the spectrum of the luminance signal, which in most realistic images are the result of the line-based nature of video signals. However it can never be perfect, there will practically always be overlap such that luminance and chrominance cannot be fully separated.
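A toy sketch of the quadrature modulation itself, in case it helps (my own illustration with the PAL subcarrier frequency plugged in; real encoders add filtering, gain factors, and the burst):

```python
import numpy as np

# Two color-difference values, held constant over one line for simplicity.
U, V = 0.3, -0.2

fsc = 4_433_618.75  # PAL color subcarrier frequency, Hz
t = np.arange(0, 64e-6, 1 / 13_500_000)  # one 64 us line at an arbitrary rate

# Quadrature amplitude modulation: two carriers 90 degrees apart share the
# same band without interfering with each other.
chroma = U * np.sin(2 * np.pi * fsc * t) + V * np.cos(2 * np.pi * fsc * t)

# A receiver that knows the carrier phase (thanks to the burst) recovers each
# component by multiplying with the matching carrier and low-pass filtering
# (here just the mean, since U and V are constant on this line):
u_rec = 2 * np.mean(chroma * np.sin(2 * np.pi * fsc * t))
v_rec = 2 * np.mean(chroma * np.cos(2 * np.pi * fsc * t))
print(u_rec, v_rec)  # approximately 0.3 and -0.2
```

This is also why the burst is indispensable: multiply with a carrier of the wrong phase and the U/V components smear into each other, which on NTSC shows up as a hue shift.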
The color burst, if I'm not mistaken:
https://www.eetimes.com/wp-content/uploads/media-1050360-c01...
> On the oscilloscope, I assume the front and back porch is clearly visible on the left and right hand side.
Not quite.
The front porch is actually the black level on the right hand side (I know, this confused me at first, too), at the end of the line, when the signal drops back to the black level. The sync pulse is the lower-than-black bit in the middle. The back porch is the portion thereafter, containing the fuzzy bit. Collectively these are called the horizontal blanking interval.
The fuzzy bit is the color burst. Color TV was a hack on the original B&W design. To put color information into the signal without making a whole new signal that existing sets couldn't decode, the color information is ̶p̶h̶a̶s̶e̶-̶m̶o̶d̶u̶l̶a̶t̶e̶d̶ quadrature-modulated upon the luma signal. You need a phase reference to decode that information, and that's what the color burst is there for. If this were a color demo, rather than the nice, neat square waves, we'd see the rest of the signal line looking similarly fuzzy. Wikipedia has a nice diagram[1].
Why is the sync pulse below the black level? Well, consider the original B&W signal. The brightness of any portion of the picture was controlled by how strongly the cathode was emitting when it swept across that portion of the screen. The intensity of the picture tube's cathode is controlled directly by the voltage level of the video signal[2] during the active display period (the portion of the line not part of the horizontal blanking interval). So great, we've come to the end of a line, shut off the cathode after it has swept one line across the display tube, and need to reposition it to sweep another line.

Why can't we just keep the signal at the black level instead of dipping below black? Well, analog signals and circuits can be kinda fiddly. Components drift or go bad. People think they know better than the warning of no user-serviceable parts. So it's entirely possible that due to a bad or misconfigured circuit, the cathode could actually not be quite off when fed a signal at the black level. In one of those cases, if the cathode were still on during retrace, the user would see slanting lines drawn across the picture, and might rightly get annoyed at this. So, to ensure that, no really, the cathode is actually off for reals during retrace, the signal is pulled below the black level so there will be no retrace lines drawn on the picture on any sets where the brightness levels are iffy for whatever reason.[3]
1. https://en.wikipedia.org/wiki/Analog_television#Structure_of....
2. You could demonstrate this by raising/lowering the level of the square wave from/to the black/white level on the signal in this demo, and get a neat gradient effect.
3. On some sets, you actually can still force the cathode on during retrace and see the retrace lines by cranking the brightness all the way up. Probably not the best thing for the life of the tube, though.
Edit: update with corrections in anyfoo's reply, below. Also added the link to the Wikipedia diagram I had originally intended to include.
More Edit: Added an explanation of why the sync pulse is below the black level (blacker-than-black), because even though parent didn't ask, I'm pretty sure someone else is wondering (I know I used to).
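The "blacker than black" idea is easiest to see in numbers. A sketch of the standard NTSC composite levels on the IRE scale, where 140 IRE spans the 1 Vpp signal (my numbers, not from the comments above):

```python
# Standard NTSC composite levels on the IRE scale (140 IRE = 1 V peak-to-peak).
IRE_VOLTS = 1.0 / 140

def ire_to_volts(ire):
    """Voltage relative to the blanking level (0 IRE)."""
    return ire * IRE_VOLTS

levels = {
    "sync tip": -40,        # well below black: guarantees the beam is off
    "blanking": 0,
    "black (setup)": 7.5,   # NTSC's slightly elevated black level
    "white": 100,
}

for name, ire in levels.items():
    print(f"{name:>14}: {ire:6.1f} IRE = {ire_to_volts(ire) * 1000:7.1f} mV")
```

The sync tip sits almost 300 mV below blanking, so even a drifted set can tell "reposition the beam" apart from "draw black".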
Good explanation, though "phase-modulated" is not the whole picture (hah), it's most correctly quadrature amplitude modulated, which is basically a fancy way of saying that there is a modulation in phase and amplitude.
The two signals in the quadrature amplitude modulation are the two color difference signals. While that is the intent and also usually how encoders and decoders are actually implemented, most elegantly the math works out to a different interpretation as well: You can see the phase as the hue (i.e. what color), and the amplitude as the saturation (i.e. how much color).
That alternative interpretation works extremely well to explain the whole scheme without having to explain how and why quadrature amplitude modulation works (which needs a lot of math).
What causes the "spikes" in the trace on all the lines? They're regular, but in different positions - at first I thought it was a grid pattern. If my math works out, it's a ~53 kHz pulse.