The Path to Ubiquitous AI

taalas.com

6 points by 2001zhaozhao 9 days ago · 3 comments

2001zhaozhao (OP) 9 days ago

Saw this on /r/localllama

It's an LLM ASIC that runs a single LLM at ridiculous speeds. It's a demonstration chip that currently runs Llama-3-8B, but they're working on scaling it to larger models. I think it has very big implications for how AI will look a few years from now. IMO the crucial question is whether they will be hard-limited by model size, similarly to Cerebras.

dust42 8 days ago

Interesting hardware, but I wonder whether it is capable of KV caching. If not, it would only be useful for applications that have short context but benefit from very low latency. Voice-to-voice applications may be a good example.
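For readers unfamiliar with the term: a KV cache stores the key and value projections of past tokens so each decode step only computes projections for the new token, instead of reprocessing the whole prefix. A minimal single-head sketch (illustrative toy code, not anything specific to this chip), using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                          # head dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # scaled dot-product attention for a single query vector
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

xs = rng.standard_normal((5, d))               # five token embeddings

# With a KV cache: each step projects only the newest token.
K_cache, V_cache, cached_out = [], [], []
for x in xs:
    K_cache.append(Wk @ x)
    V_cache.append(Wv @ x)
    cached_out.append(attend(Wq @ x, np.array(K_cache), np.array(V_cache)))

# Without a cache: re-project the entire prefix at every step.
full_out = []
for t in range(len(xs)):
    K = xs[:t + 1] @ Wk.T
    V = xs[:t + 1] @ Wv.T
    full_out.append(attend(Wq @ xs[t], K, V))

# Same outputs; the cache just avoids the redundant recomputation.
assert np.allclose(cached_out, full_out)
```

The point of the comment above is that the cache is extra on-chip state: without it, per-token cost grows with context length, which is why a cache-less design would favor short-context, latency-sensitive workloads.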

max8539 9 days ago

This is crazy! These chips could run high-reasoning models so fast that they could generate lots of solution variants and automatically choose the best one. Or you could have a chip like this in your home lab and run local models fast, without needing a lot of expensive hardware or electricity.
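The "generate many variants and pick the best" idea is commonly called best-of-N sampling. A hedged sketch of the control flow, where `generate` and `score` are placeholder stand-ins for a real model and a real verifier (neither is part of the product being discussed):

```python
import random

def generate(prompt, seed):
    # Placeholder for sampling one candidate answer from a model.
    rng = random.Random(seed)
    return f"candidate {rng.randint(0, 99)} for: {prompt}"

def score(answer):
    # Placeholder quality metric; a real setup might use a verifier
    # model, unit tests, or a reward model here.
    return sum(ord(c) for c in answer) % 100

def best_of_n(prompt, n=8):
    # Sample N independent candidates, keep the highest-scoring one.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

answer = best_of_n("What is 2 + 2?")
print(answer)
```

The appeal of very fast inference here is that the N samples are cheap in wall-clock time, so quality can be traded for extra generation passes.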
