An Apple I Emulator in shader language
shadertoy.com
Wow, I was expecting graphics in the style of Return of the Obra Dinn, but this is much more impressive.
Lucas Pope's work on bringing the 1-bit aesthetic into the modern age is fascinating in itself. One of my favorite excerpts is how Pope 'stabilized dither' in order to reduce the harsh effects that dithering moving images can create: https://forums.tigsource.com/index.php?topic=40832.msg136374...
Careful: froze my browser (Chromium) for 30 seconds or so.
Shader language programming is one of the most fascinating environments I know. Everything tells me that reading and writing data via texture coordinate lookups should be orders of magnitude slower than "normal" programming, yet it is often orders of magnitude faster.
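The trick, roughly, is that ShaderToy lets a buffer read back its own previous frame, so the machine state lives in a texture and every pixel re-reads it with texelFetch. A minimal sketch of that pattern (the one-byte-per-pixel layout here is just an assumption for illustration):

    // Buffer A samples itself via iChannel0; each pixel stores one emulated byte (assumed layout).
    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        ivec2 cell = ivec2(fragCoord);                            // which byte this pixel holds
        float byteVal = texelFetch(iChannel0, cell, 0).r * 255.0; // read last frame's state

        // ... step the emulated machine here and update byteVal ...

        fragColor = vec4(byteVal / 255.0, 0.0, 0.0, 1.0);         // write the new state back
    }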
To be clear, this particular emulator is a lot slower than an Apple I emulator on a normal CPU.
The delay is the shader compile time.
Very cool! This emulator even correctly mimics some of the hardware in the Apple 1 TV interface that strips out invalid characters. Many emulators miss this and simply replace them with a blank space. https://apple1org.wordpress.com/2012/02/19/a-common-apple-1-...
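For anyone curious, the filtering amounts to something along these lines (the exact accepted range is my assumption; see the linked article for the real hardware behaviour):

    // Assumed sketch of the display-side filter: the terminal section only
    // takes printable characters, so anything else is dropped entirely
    // rather than rendered as a blank space.
    bool acceptChar(int c)
    {
        c &= 0x7F;                       // the display ignores the high bit
        return (c >= 0x20 && c <= 0x5F)  // printable range (assumed)
            || c == 0x0D;                // carriage return is also accepted
    }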
Nice! I'd pondered using this technique for a while; since each shader compute element is a fairly powerful processor on its own, this can effectively run one copy of the emulator for every pixel.
Unfortunately, since emulation is an "embarrassingly serial" problem, it doesn't go particularly quickly.
Actually, it's not inherently serial. If you think about it, an Apple I consists of a lot of chips, which could be nicely modeled in, say, Verilog, which is good for modeling parallel stuff. But it does not lend itself well to shader language.
To clarify: it does not lend itself well to ShaderToy, because the computational model is hamstrung by the fact that the threads can't communicate with each other within a draw call. On a more modern GPU platform with compute shaders, you get a lot more of that, and I think interesting things will happen. I'm personally really looking forward to WebGPU, as I think it will make a lot of this stuff much more accessible.
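For comparison, this is roughly what a compute shader adds over a per-pixel draw call: threads in one workgroup can publish data to shared memory, synchronise, and then all read it. A sketch only; the buffer layout and names are made up:

    #version 310 es
    // Sketch of workgroup-level communication; the State buffer is hypothetical.
    layout(local_size_x = 64) in;
    shared uint busValue;                                // value shared by the whole workgroup
    layout(std430, binding = 0) buffer State { uint cells[]; };

    void main()
    {
        uint id = gl_GlobalInvocationID.x;
        if (gl_LocalInvocationID.x == 0u)
            busValue = cells[gl_WorkGroupID.x];          // one thread publishes a value
        memoryBarrierShared();                           // make the write visible
        barrier();                                       // every thread waits for it
        cells[id] = busValue + id;                       // then all threads can read it
    }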
Unfortunately, after spending a lot of time in the committee, I think it's ultimately doomed. Chrome has WebGL-with-compute-shaders behind a flag. Really, they should have standardized and shipped that years ago.
Even when you emulate the different chips in parallel, what usually kills performance is the synchronization between them.
I can't enter "+" on my US QWERTY keyboard. It is shift-equals, but both the shifted and unshifted keys show up as plain old equals in the emulator.
So I'm running macOS in KVM with pcie passthrough (Vega64) in order to emulate an Apple I on the GPU. What a tremendous era we live in ;D
I've been trying to understand the tao of shaders for a while. Any good resources (besides The Book of Shaders)?
Can't run it on Linux in Chrome or Firefox, even though the console says WebGL support is available.
Shader compilers are pretty bad, and the result depends on the browser and driver. There are plenty of platform differences when accessing uninitialized memory, using more complicated constructs like structures, performing math operations with undefined results, or incorrectly declaring whether a function argument is in, out, or inout.
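To illustrate (nothing from this emulator in particular, just the kinds of constructs I mean):

    // Illustrative only: places where GLSL output tends to diverge across drivers.
    float risky(float x)
    {
        float y;                  // uninitialized read: 0 on some drivers, garbage on others
        y += 1.0;
        float z = pow(x, 0.5);    // undefined for x < 0, and the "result" differs per GPU
        return y + z;             // a wrong in/out/inout qualifier is a similar trap
    }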
Oh hmm, that's interesting. Thanks.
The shaders on https://shaderfrog.com/app all work fine for me and those looked pretty complex. Maybe they're not using the same type of "shader compilers"? Or perhaps making a CPU emulator with shader compilers is just more complicated...
That sounds like a Linux problem.