GLSL rendering for speed by calebmadrigal · Pull Request #4 · calebmadrigal/fuzzygraph


added 6 commits

December 2, 2025 00:48
This entire commit was generated by ChatGPT Codex with the following 2 prompts:

PROMPT 1: Rewrite fuzzygraph.js to use GLSL for rendering fuzzy graphs. Keep the following function names, and continue to export them: parseEquationString, displayGraph, calcWindowBounds, getXWidth, makeLinearMapper. In index.html, make the fewest changes possible to make it work. Keep the vertex shader simple. Most of the work should be done as a fragment shader. Call this file fuzzygraph_glsl.js.

PROMPT 2: This seems to work, but is very slow - no speedup over the purely cpu-bound original version. The calculateFuncForWindow function is still using a CPU-bound 2-deep for-loop. This code should be moved to glsl also. Refactor things more if need be to move more of the computation (especially the most CPU-bound parts like those executed for every pixel) to GLSL code.
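The CPU-bound pattern that PROMPT 2 describes is a 2-deep loop evaluating the graph function once per pixel. A minimal sketch of that pattern, and of the fragment-shader shape it gets replaced with (all names here other than `calculateFuncForWindow` are hypothetical, and the shader body stands in for the real parsed equation):

```javascript
// Sketch of the CPU-bound hot spot: evaluate func(x, y) once per pixel.
// Names other than calculateFuncForWindow are hypothetical.
function calculateFuncForWindow(func, bounds, width, height) {
  const values = new Float32Array(width * height);
  for (let py = 0; py < height; py++) {        // one pass per row
    for (let px = 0; px < width; px++) {       // one pass per column
      const x = bounds.xMin + (px / (width - 1)) * (bounds.xMax - bounds.xMin);
      const y = bounds.yMin + (py / (height - 1)) * (bounds.yMax - bounds.yMin);
      values[py * width + px] = func(x, y);    // executed width * height times
    }
  }
  return values;
}

// In the GLSL version, the same per-pixel work runs as one fragment-shader
// invocation per pixel, roughly:
const fragmentShaderSketch = `
precision highp float;
uniform vec2 uXRange;      // (xMin, xMax)
uniform vec2 uYRange;      // (yMin, yMax)
uniform vec2 uResolution;  // canvas size in pixels
void main() {
  vec2 t = gl_FragCoord.xy / uResolution;
  float x = mix(uXRange.x, uXRange.y, t.x);
  float y = mix(uYRange.x, uYRange.y, t.y);
  float v = sin(x * y);    // placeholder for the parsed equation
  gl_FragColor = vec4(vec3(v), 1.0);
}`;
```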
Bugs: inverted y axis, invert color not working, graph off-center initially

This commit was done by ChatGPT Codex using this prompt: That looks great. Good job AI. There are a few remaining bugs though. The image is inverted - please invert y axis. Also, the Invert Color functionality is not working. Lastly, when the graph first loads, it is off-center. But when I hit the home button, it centers correctly. Please fix these bugs.
Fix done by ChatGPT Codex with the following prompt: The error shown when the mouse goes over the canvas is inverted on the y-axis. Fix this.
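The mouse-over inversion comes from canvas coordinates growing downward while graph coordinates grow upward. A sketch of the flip using the exported `makeLinearMapper` (its signature is assumed here; the real one in fuzzygraph.js may differ, and `canvasToGraph` is hypothetical):

```javascript
// Assumed signature: map [inMin, inMax] linearly onto [outMin, outMax].
function makeLinearMapper(inMin, inMax, outMin, outMax) {
  const scale = (outMax - outMin) / (inMax - inMin);
  return (v) => outMin + (v - inMin) * scale;
}

// Hypothetical helper: convert a canvas pixel to graph coordinates.
// The y mapper runs from bounds.yMax down to bounds.yMin, so canvas
// row 0 (top of the canvas) maps to the top of the graph window.
function canvasToGraph(px, py, width, height, bounds) {
  const mapX = makeLinearMapper(0, width - 1, bounds.xMin, bounds.xMax);
  const mapY = makeLinearMapper(0, height - 1, bounds.yMax, bounds.yMin); // note the flip
  return { x: mapX(px), y: mapY(py) };
}
```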
The min and max override handling is slightly different with the GLSL implementation...

In the CPU renderer (fuzzygraph.js), the min/max values (including overrides) are passed through the fuzzy transfer before building the colormap: getColormap receives fuzzyModifier(minValue) and fuzzyModifier(maxValue). This means any override is first transformed by the fuzzy power/binary mapping before normalization.

In the GLSL renderer (fuzzygraph_glsl.js), the fragment shader applies the fuzzy transfer to each sampled value but normalizes using the raw uMin/uMax uniforms provided by the overrides. The uniforms are not pre-processed by the fuzzy transfer, so overrides are compared directly against the transformed values, leading to different min/max behavior compared to the CPU path.
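The two normalization paths can be sketched side by side. This is a minimal illustration, assuming `fuzzyModifier` is a power mapping (the real transfer in fuzzygraph.js may differ) and modeling the shader's math in plain JavaScript:

```javascript
// Assumption: a simple power-style fuzzy transfer for illustration.
const fuzzyModifier = (v) => Math.pow(Math.abs(v), 0.25);

// CPU path (fuzzygraph.js): the overrides themselves go through the fuzzy
// transfer before normalization, as in getColormap(fuzzyModifier(min),
// fuzzyModifier(max)).
function normalizeCpu(value, minOverride, maxOverride) {
  const lo = fuzzyModifier(minOverride);
  const hi = fuzzyModifier(maxOverride);
  return (fuzzyModifier(value) - lo) / (hi - lo);
}

// GLSL path before the fix: the shader transforms each sampled value but
// normalizes against the raw uMin/uMax uniforms, so the overrides are
// compared directly to already-transformed values.
function normalizeGlslOld(value, uMin, uMax) {
  return (fuzzyModifier(value) - uMin) / (uMax - uMin);
}
```

With an override range of [0, 16], the CPU path maps a sample of 16 to exactly 1.0, while the old GLSL path maps it to 2/16 = 0.125, which is the divergence described above.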

I need to decide which way of handling the min and max overrides is preferable, and then either change it or leave it.
Used ChatGPT Codex to make this change with the following 2 prompts:

PROMPT 1: Those fixes worked. There seems to be a slightly different behavior in how min and max overrides are working. I'm not sure if the bug is in the new code (fuzzygraph_glsl.js) or the previous code (in fuzzygraph.js). Can you identify what the difference is?

PROMPT 2: Based on the differences described under your "Difference in min/max handling" above, modify the glsl version of fuzzygraph so it handles the min and max overrides as the old version did