Mind blown by NotebookLM generating a podcast on LLM sparsity
open.spotify.com

We tested its ability to explain sparsity in LLMs - a concept that's highly technical and often misunderstood.
Inputs:
- Our GitHub repo (link in comments)
- Research papers: Deja Vu & LLM in a Flash
- A Reddit thread rich in community commentary
The output was pure magic.
A clean, cogent podcast that distills all of it - sparsity, memory access, retrieval patterns - into something even non-ML researchers can grasp. When I don't have sources to hand, I find that generating a Deep Research report in Gemini first, then passing that report to NotebookLM, gives really good results.
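For anyone curious what the sparsity those papers discuss actually looks like: in Deja Vu-style contextual sparsity, most FFN neurons produce zero after ReLU for a given token, so you only need to compute (and, per LLM in a Flash, only load from slow memory) the weight rows of the neurons that fire. Here's a toy sketch of that idea - hand-picked numbers, not code from our repo, and the function names are my own:

```python
def ffn_dense(x, w_in, w_out):
    # Dense FFN: h = relu(W_in @ x), y = W_out @ h - touches every weight row.
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w_in]
    return [sum(w_out[j][i] * h[i] for i in range(len(h))) for j in range(len(w_out))]

def ffn_sparse(x, w_in, w_out, active):
    # Sparse FFN: compute only neurons predicted to fire; the rest are
    # zeroed by ReLU anyway, so their weight rows never need to be loaded.
    y = [0.0] * len(w_out)
    for i in active:
        h_i = max(0.0, sum(w * xi for w, xi in zip(w_in[i], x)))
        for j in range(len(y)):
            y[j] += w_out[j][i] * h_i
    return y

# Tiny example: 2-dim input, 4 hidden neurons, only 2 of them fire.
x = [1.0, 0.5]
w_in = [[1.0, -0.5], [-1.0, 0.2], [0.3, 0.4], [-0.2, -0.9]]
w_out = [[0.5, 1.0, -1.0, 0.2], [0.1, -0.3, 0.7, 0.4]]

# Which neurons have a positive pre-activation (a real system would
# *predict* this set cheaply instead of computing it).
active = [i for i, row in enumerate(w_in)
          if sum(w * xi for w, xi in zip(row, x)) > 0]
print(active)                              # → [0, 2]
print(ffn_dense(x, w_in, w_out))
print(ffn_sparse(x, w_in, w_out, active))  # matches the dense output
```

Half the hidden weights are never touched here, yet the output is identical - that's the memory-access win the podcast manages to explain so clearly.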