Show HN: Eyot, A programming language where the GPU is just another thread

cowleyforniastudios.com

79 points by steeleduncan 11 days ago · 21 comments

teleforce 10 days ago

Perhaps any new language targeting GPU acceleration should consider the tile-based concepts and primitives recently supported by major GPU vendors, including Nvidia [1],[2],[3],[4]; a rough sketch of the tile idea follows the references below.

For more generic GPU targets there's Triton [5],[6].

[1] NVIDIA CUDA 13.1 Powers Next-Gen GPU Programming with NVIDIA CUDA Tile and Performance Gains:

https://developer.nvidia.com/blog/nvidia-cuda-13-1-powers-ne...

[2] Nvidia Tilus: A Tile-Level GPU Kernel Programming Language:

https://github.com/NVIDIA/tilus

[3] Simplify GPU Programming with NVIDIA CUDA Tile in Python:

https://developer.nvidia.com/blog/simplify-gpu-programming-w...

[4] Tile Language:

https://github.com/tile-ai/tilelang

[5] Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations:

https://dl.acm.org/doi/10.1145/3315508.3329973

[6] Triton:

https://github.com/triton-lang/triton
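
To make the tile idea above concrete, here is a minimal CPU-side sketch of tile (block) decomposition in plain C++, not in any of the linked tile languages; the tile width T and all names are illustrative assumptions. On a GPU the same loop structure maps tiles onto shared memory or registers rather than cache.

    #include <cstddef>
    #include <vector>

    // Illustrative tile width; real tile languages pick this per device.
    constexpr std::size_t T = 32;

    // C += A * B computed over T x T tiles, so each inner block works on
    // data that fits in fast memory (cache here, shared memory on a GPU).
    // For brevity, n is assumed to be a multiple of T, matrices are
    // row-major n x n, and C starts zero-initialized.
    void tiled_matmul(const std::vector<float>& A, const std::vector<float>& B,
                      std::vector<float>& C, std::size_t n) {
        for (std::size_t bi = 0; bi < n; bi += T)
            for (std::size_t bj = 0; bj < n; bj += T)
                for (std::size_t bk = 0; bk < n; bk += T)
                    // one tile-level multiply-accumulate
                    for (std::size_t i = bi; i < bi + T; ++i)
                        for (std::size_t j = bj; j < bj + T; ++j) {
                            float acc = C[i * n + j];
                            for (std::size_t k = bk; k < bk + T; ++k)
                                acc += A[i * n + k] * B[k * n + j];
                            C[i * n + j] = acc;
                        }
    }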

MeteorMarc 11 days ago

That is fun: it blends C-style block markers (curly braces) with Python-style line separation (newlines). No objection.

shubhamintech 11 days ago

The latency point matters more than it looks, imo. GPU work isn't just async CPU work at a different speed; the cost model is completely different. In LLM inference, the hard scheduling problem is batching non-uniform requests, where prompt lengths and generation lengths both vary, and treating that like normal thread scheduling leads to terrible utilization. Would be curious if Eyot has anything to say about non-uniform work units.
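
As a concrete illustration of the non-uniform-work-unit problem, here is a hedged C++ sketch of the simplest mitigation, length-bucketing; the Request type and all numbers are hypothetical, and real inference servers go further with continuous batching, refilling batch slots as generations finish at different times.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Hypothetical request: prompt length is known up front, but
    // generation length is not, which is what makes batching hard.
    struct Request { std::size_t id; std::size_t prompt_tokens; };

    // Naive length-bucketing: sort by prompt length so each batch
    // wastes little padding on its longest member.
    std::vector<std::vector<Request>> make_batches(std::vector<Request> reqs,
                                                   std::size_t batch_size) {
        std::sort(reqs.begin(), reqs.end(),
                  [](const Request& a, const Request& b) {
                      return a.prompt_tokens < b.prompt_tokens;
                  });
        std::vector<std::vector<Request>> batches;
        for (std::size_t i = 0; i < reqs.size(); i += batch_size) {
            std::size_t end = std::min(i + batch_size, reqs.size());
            batches.emplace_back(reqs.begin() + i, reqs.begin() + end);
        }
        return batches;
    }

    int main() {
        std::vector<Request> reqs{{0, 900}, {1, 12}, {2, 15}, {3, 870}};
        for (const auto& batch : make_batches(reqs, 2)) {
            for (const auto& r : batch)
                std::cout << r.id << "(" << r.prompt_tokens << ") ";
            std::cout << "\n";  // prints 1(12) 2(15), then 3(870) 0(900)
        }
    }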

  • steeleduncanOP 11 days ago

    Not right now; it's far too early days. I'm currently working through bugs and missing stdlib pieces to get a simple backpropagation network running efficiently. Once I'm happy with that, I'd like to move on to more complex models.

    • CyberDildonics 10 days ago

      What is the new language doing that can't be done with an already established language, and that is worth sacrificing an entire standard library for?

sourcegrift 11 days ago

Don't mean to be a Rust fanatic or whatever, but does anyone know of anything similar for Rust?

LorenDB 11 days ago

This reminds me that I'd love to see SYCL get more love. Right now, out of the computer hardware manufacturers, it seems that only Intel is putting any effort into it.

  • jamiejquinn 10 days ago

    CUDA having had such a wide moat for so long has completely warped the GPU software ecosystem. There just isn't any incentive for Nvidia to meaningfully contribute to any external, standards-driven effort like SYCL or OpenCL. A real shame, because it leads to a tonne of duplicated effort as AMD and Intel try to reimplement the exact same libraries as Nvidia's (and usually worse, because neither seems to prioritise good software, for whatever reason).
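
For readers who haven't seen SYCL, here is a minimal sketch of a SYCL 2020 vector add in standard C++. It assumes an installed SYCL implementation (e.g. Intel's DPC++ or AdaptiveCpp); the sizes and values are illustrative.

    #include <sycl/sycl.hpp>
    #include <cstddef>
    #include <iostream>

    int main() {
        sycl::queue q;  // default selector: picks a GPU if one is available
        const std::size_t n = 1024;

        // Unified shared memory, visible to both host and device.
        float* a = sycl::malloc_shared<float>(n, q);
        float* b = sycl::malloc_shared<float>(n, q);
        for (std::size_t i = 0; i < n; ++i) { a[i] = float(i); b[i] = 2.0f; }

        // One work-item per element; the lambda body runs on the device.
        q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
            a[i] += b[i];
        }).wait();

        std::cout << a[0] << " " << a[n - 1] << "\n";  // 2 1025
        sycl::free(a, q);
        sycl::free(b, q);
    }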

CyberDildonics 11 days ago

Every time someone does something with threading and makes it a language feature, it always seems like it could just be done with stock C++.

Whatever this is doing could be wrapped up in another language.

Either way, it's arguable whether that is even a good idea, since dealing with a regular thread in the same memory space, getting data to and from the GPU, and doing computations on the GPU are all completely separate operations with different latency characteristics.
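
For comparison, a "stock C++" version of GPU-as-another-thread has roughly this shape; gpu_scale is a hypothetical stand-in for a real offload (which would enqueue a kernel and copy buffers), and, as the comment notes, nothing in this launch/join shape hides the transfer latency.

    #include <future>
    #include <iostream>
    #include <vector>

    // Hypothetical stand-in for a GPU offload; a real version would
    // launch a kernel and move the data across the PCIe bus.
    std::vector<float> gpu_scale(std::vector<float> v, float k) {
        for (float& x : v) x *= k;
        return v;
    }

    int main() {
        std::vector<float> data(1 << 20, 1.0f);

        // "The GPU is just another thread": launch the offload
        // asynchronously and overlap it with CPU work.
        auto fut = std::async(std::launch::async, gpu_scale, data, 3.0f);

        // ... unrelated CPU work would go here ...

        auto result = fut.get();  // join; any transfer cost is paid here
        std::cout << result.front() << "\n";  // 3
    }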
