2015 radio interview: AI as "high-level algebra" before Transformers and LLMs

doomlaser.com

2 points by doomlaser a month ago · 1 comment

doomlaserOP a month ago

Posting this in light of Sergey Brin’s Stanford comments last week about Google under-investing after publishing the Transformer paper — not just in scaling compute, but in turning that invention into first-class LLM products.

I revisited a 2015 radio interview I did (months before OpenAI existed) where I tried to reason about AI as “high-level algebra.” I didn’t have today’s vocabulary, but the underlying intuition—intelligence as inference over math and incentives—ends up looking surprisingly close to how LLMs actually work.

Curious which parts people think aged well vs. clearly wrong.
