Mind blown by NotebookLM generating a podcast on LLM sparsity

open.spotify.com

1 points by nrjpoddar 8 months ago · 2 comments · 1 min read


We tested its ability to explain sparsity in LLMs, a concept that is highly technical and often misunderstood.

Inputs:
- Our GitHub repo (link in comments)
- Research papers: Deja Vu & LLM in a Flash
- A Reddit thread rich in community commentary

The output was pure magic.

A clean, cogent podcast that distills all of it (sparsity, memory access, retrieval patterns) into something even non-ML researchers can grasp.
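For readers unfamiliar with the idea the podcast covers: the papers cited below (Deja Vu, LLM in a Flash) exploit the fact that in a transformer's feed-forward layers, most neurons produce zero (or near-zero) activations for any given input, so the computation and weight loading for inactive neurons can be skipped. A minimal sketch of that intuition, not taken from the repo or papers (all names and sizes here are illustrative assumptions):

```python
# Illustrative sketch of activation sparsity in a ReLU feed-forward layer,
# the intuition behind "contextual sparsity" papers like Deja Vu.
# All dimensions and variable names are made up for this example.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff = 64, 256

x = rng.normal(size=d_model)             # one token's hidden state
W1 = rng.normal(size=(d_ff, d_model))    # up-projection
W2 = rng.normal(size=(d_model, d_ff))    # down-projection

# Dense FFN: y = W2 @ relu(W1 @ x)
h = np.maximum(W1 @ x, 0.0)              # ReLU zeroes out many neurons
y_dense = W2 @ h

# Sparse FFN: only touch the columns of W2 for active neurons,
# skipping memory traffic and compute for the rest.
active = np.nonzero(h)[0]
y_sparse = W2[:, active] @ h[active]

print(f"{len(active)}/{d_ff} neurons active")
assert np.allclose(y_dense, y_sparse)    # result is identical
```

The memory-access angle (central to LLM in a Flash) follows from the same structure: if you can predict `active` before loading weights, you only fetch the needed columns of `W2` from slow storage.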

nrjpoddarOP 8 months ago

https://github.com/NimbleEdge/sparse_transformers and https://www.reddit.com/r/LocalLLaMA/comments/1l44lw8/sparse_... were the inputs

Leynos 8 months ago

I find that if I generate a Deep Research report in Gemini first, then pass that report to NotebookLM, I get really good results when I don't already have the sources to hand.
