Show HN: Factual AI Q&A – Answers based on Huberman Lab transcripts

huberman.rile.yt

120 points by rileyt 3 years ago · 44 comments

This is a quick prototype I built for semantic search and factual question answering using embeddings and GPT-3.

It tries to solve the LLM hallucination issue by guiding it to answer only from the given context instead of making things up. If you ask something not covered in an episode, it should say that it doesn't know rather than providing a plausible but potentially incorrect response.

It uses Whisper to transcribe, text-embedding-ada-002 to embed, Pinecone.io to search, and text-davinci-003 to generate the answer.
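
Roughly, the answer flow looks like this (a simplified sketch; variable names and the exact prompt wording here are illustrative, not the real code):

    import openai
    import pinecone

    openai.api_key = "..."
    pinecone.init(api_key="...", environment="...")
    index = pinecone.Index("huberman")  # index name is a placeholder

    question = "How does caffeine affect sleep?"

    # Embed the question with the same model used for the transcript chunks
    q_emb = openai.Embedding.create(
        model="text-embedding-ada-002", input=question
    )["data"][0]["embedding"]

    # Pull the most relevant transcript chunks from Pinecone
    res = index.query(vector=q_emb, top_k=5, include_metadata=True)
    context = "\n\n".join(m["metadata"]["text"] for m in res["matches"])

    # Ask davinci to answer only from that context
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    answer = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=250, temperature=0
    )["choices"][0]["text"].strip()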

More examples and explanations here: https://twitter.com/rileytomasek/status/1603854647575384067

jonathan-adly 3 years ago

I did the same for the FDA drug label database and 100% believe that this is the future for search. A semantic search layer for context, then the large language model layer for human answers.

Tip - you don’t actually need GPT-3 level embedding for a decent semantic search. Sentence transformers paired with one of their models is good enough.

I like this: https://huggingface.co/sentence-transformers/multi-qa-MiniLM... - since it’s very light.

Also, perhaps I am an idiot, but I just used a Postgres array field to store my embedding arrays to keep things simple and free.
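
The search side is basically just this (simplified; the Postgres round trip is omitted, and I'm assuming the multi-qa-MiniLM-L6-cos-v1 variant):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

    # Embed documents once; the vectors can live in a plain Postgres float[] column
    docs = ["first drug label section ...", "second drug label section ..."]
    doc_embs = model.encode(docs, normalize_embeddings=True)

    # At query time, embed the question and rank documents by cosine similarity
    q_emb = model.encode("what are the common side effects?", normalize_embeddings=True)
    scores = util.cos_sim(q_emb, doc_embs)[0]
    top = scores.argsort(descending=True)[:3]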

  • rileytOP 3 years ago

    It was less than $2 to embed all 100+ episodes with the new OpenAI embeddings and was as easy as just making a bunch of API calls. Pretty hard to beat that experience.

  • rohit89 3 years ago

    Did you use the sentence-transformers model as-is or did you need to fine-tune it for medical data?

    • jonathan-adly 3 years ago

      As is. All it's doing is pulling relevant context for the question. GPT-3 is doing all the heavy natural language lifting.

arcturus17 3 years ago

Holy f**ing s**t, this is amazing!

The UX is gorgeous, simple and snappy. I remember a few of Huberman's podcasts so I typed a few questions and the answers were spot on.

I'll be following you and your work, Riley.

  • rileytOP 3 years ago

    Thanks a lot. It's nice when people appreciate the stuff I build for fun.

    The UI is my own design system that I will open source at some point. The app is Remix with a Redis cache to keep things snappy.

doo_daa 3 years ago

This is an amazing piece of work and as others have said, the site and the UI are perfect.

On a side note, Huberman Lab bothers me. I was an avid listener to the early episodes. As I have ADHD, some of his explanations of the brain chemistry involved in attention and motivation were fascinating. But in one of the early-ish episodes he said something completely ridiculous about acupuncture (that it works), which makes me think he has no real critical thinking skills.

I hope anyone out there listening to him and thinking about applying any of the approaches he talks about just takes the time to see whether any other sources say they have real-world effects.

To the credit of the author, this tool highlights the exact thing I'm talking about. Try searching for...

"How does acupuncture work?"

"Acupuncture involves taking needles and sometimes electricity and or heat as well and stimulating particular locations on the body. Through these maps of stimulation that have been developed over thousands of years, mostly in Asia, acupuncture can reduce inflammation in the body by stimulating the body in particular ways at particular sites on the body, liberating certain cells and molecules that enhance the function of the immune system and potentially can be used to combat different types of infection."

  • Handprint4469 3 years ago

    +1 to this. Huberman has the bad habit of constructing grand narratives that match his beliefs about health and reality, and packaging them as podcast episodes based on science. The only problem is that "based on science" might mean a single low-powered study, which Huberman cites and over-generalizes as if it was rock-solid fact. See this[0] discussion on Huberman's subreddit for more context.

    Another example is his episode with Matthew Walker[1] (author of Why We Sleep), and his other episodes where he gives sleep advice citing Walker as an authority. The problem is that Walker's work is not good[2][3] (riddled with errors at best, fraudulent at worst).

    To be clear: I'm sure a lot (if not the majority) of what Huberman says is correct, or at least matches our current scientific understanding. The problem is that some things are incorrect, and the layman has no way of knowing which is which (especially since all the content is presented with the same high-certainty "science-based tools" tone)

    [0] https://old.reddit.com/r/andrewhuberman/comments/smnnb0/crit...

    [1] https://www.youtube.com/watch?v=gbQFSMayJxk

    [2] https://guzey.com/books/why-we-sleep/

    [3] https://news.ycombinator.com/item?id=21546850

  • nopinsight 3 years ago

    Do you claim that acupuncture, a practice with significant use for thousands of years and supporting scientific studies, is invalid because it goes counter to your belief? [1][2]

    Prof Huberman is an expert and he reads up on relevant studies before talking about something. Even though he may occasionally make mistakes (who doesn't?), your evidence to the contrary, if any, should be at least as strong as his.

    Moreover, accusing an accomplished scientist of lacking critical thinking skills should require substantial evidence.

    [1] https://www.nccih.nih.gov/health/acupuncture-what-you-need-t...

    [2] https://www.hopkinsmedicine.org/health/wellness-and-preventi...

    and there are many other links one can easily find.

    • doo_daa 3 years ago

      You are right and everyone makes mistakes and I'm sure much of the Huberman Lab content is great. I don't claim that acupuncture is invalid because it goes against my belief. I claim that it does not work because there are no credible studies that show that it does. The links you have provided are not to studies, they are to organisations that promote/sell acupuncture. [Edited to correct a typo]

twojacobtwo 3 years ago

This is excellent! I have only recently begun listening to Huberman Lab and my biggest issue has been that I usually don't have an opportunity to write down most of the suggestions while I'm listening.

I've only done a single search with the tool so far, but it immediately returned the details that I was hoping for, along with context and other relevant mentions of the search terms.

Thank you kindly for making and sharing this.

abrichr 3 years ago

> What are the parts of the brain that become de-synchronized in the ADHD brain?

>> The default mode network and the task networks become de-synchronized in the ADHD brain.

> What are the three parts of the brain that become de-synchronized in the ADHD brain?

>> The default mode network, the task networks, and the dopamine circuits.

From https://youtu.be/hFL6qRIJZ_Y?t=1714:

> An area called the dorsolateral prefrontal cortex ... the posterior cingulate cortex, and ... the lateral parietal lobe ... these are three brain areas that normally are synchronized in their activities ... that's how it is in a typical person. In a person with ADHD ... these brain areas are not playing well with each other.

I wonder if part of the problem might be the usage of text to speech. Did you consider scraping the transcriptions instead? e.g. with https://github.com/jdepoix/youtube-transcript-api
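
For reference, pulling the existing captions is only a couple of lines (basic usage of that library, if I remember correctly):

    from youtube_transcript_api import YouTubeTranscriptApi

    # Returns a list of {"text", "start", "duration"} segments for the video's captions
    segments = YouTubeTranscriptApi.get_transcript("hFL6qRIJZ_Y")
    transcript = " ".join(s["text"] for s in segments)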

  • rileytOP 3 years ago

    It uses Whisper for transcripts, which I believe are better than the YouTube generated ones.

    My guess is that there are more relevant results from the semantic search than I'm including in the context (to reduce costs) and that exact snippet isn't being given to the answering model as context.

    • lemming 3 years ago

      As I wrote here: https://news.ycombinator.com/item?id=34035123, I also wrote a tool to access them. I'm pretty sure there are English transcripts which are manually generated, not just the YouTube generated ones. I've always found them to be high quality, enough to make a book out of.

      • jamesbriggs 3 years ago

        For the Huberman podcast, I imagine he pays someone to do the annotations manually, so they're accurate. But on most videos I've found Whisper's annotations to be more accurate than YouTube's default annotations. Not to bash YouTube's, they're still great, but occasionally you get some weird annotations.

lr4444lr 3 years ago

I was just thinking the other day how I'd LOVE to have a way to get a summary of all of the experts' opinions Huberman has had opine on a given supplement. This goes beyond my expectations. Great work!

lemming 3 years ago

This is great. Since I also found the discoverability of podcasts annoying, I wrote a tool to download the Huberman Lab transcripts and convert them to an ebook: https://github.com/cmf/huberman. They still take a while to read though!

abhinavsharma 3 years ago

this is amazing, thank you for building this, i was literally in the process of doing this with the same stack but as a chat bot.

would you be open sourcing soon? totally understand if you want to keep it private but if you are open sourcing there’s a few other podcasts i’m interested in running this on for myself, like some parenting ones.

DeWilde 3 years ago

This is pretty amazing. Is this approach documented or explained anywhere?

I have some ideas of my own that I would love to implement similarly to this and it would help to know how to get started.

  • charcircuit 3 years ago

    I imagine it is something similar to the following.

    Preprocessing

    1. Transcribe the dataset

    2. Chunk the transcription into paragraphs.

    3. Store the embedding of each paragraph into a vector database.

    Querying

    1. Convert the user's query into an embedding

    2. Query the vector database for the top N closest embeddings and fetch the paragraphs that correspond to them. To be robust against queries which you don't have results for you should limit how far away results can be from the user's query.

    3. Using those paragraphs, craft a prompt that you will give to an LLM.

    4. Do any final filtering on what you got back from the LLM.
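
    In code, that might look roughly like this (model choice, index name, and the similarity cutoff are all guesses):

        import openai
        import pinecone

        openai.api_key = "..."
        pinecone.init(api_key="...", environment="...")
        index = pinecone.Index("transcripts")  # placeholder index name

        # Preprocessing: embed each paragraph and store it with its text as metadata
        paragraphs = ["first transcript paragraph ...", "second transcript paragraph ..."]
        for i, para in enumerate(paragraphs):
            emb = openai.Embedding.create(model="text-embedding-ada-002", input=para)
            index.upsert([(f"para-{i}", emb["data"][0]["embedding"], {"text": para})])

        # Querying: embed the question, fetch the closest paragraphs, drop weak matches
        question = "what does he say about cold exposure?"
        q = openai.Embedding.create(model="text-embedding-ada-002", input=question)
        res = index.query(vector=q["data"][0]["embedding"], top_k=5, include_metadata=True)
        context = [m["metadata"]["text"] for m in res["matches"] if m["score"] > 0.75]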

  • jamesbriggs 3 years ago

    I built something similar using a variety of YouTube channels focused on NLP, AI, etc. The app is here https://huggingface.co/spaces/jamescalam/ask-youtube - you can ask things like "what is a transformer model?" or "what is semantic search?"

    The way I built it is documented here: https://www.pinecone.io/learn/openai-whisper/

    Afaik it's the same approach as Riley, that is:

    - Scrape audio of YouTube videos

    - Transcribe to text with OpenAI's Whisper

    - Use sentence transformer to create embeddings of text

    - Index embeddings (with transcribed text, timestamps, and video URL attached) in Pinecone's vector database

    - Wrap up the querying functionality in a nice UI

    (this is for the search functionality)

    If you want to replicate the Q&A part, I also built something similar and wrote about it (https://youtu.be/coaaSxys5so). It's essentially the same process, but we return text snippets to GPT-3 along with the original question and it generates an answer.

    • jamesbriggs 3 years ago

      I should add, Riley used the ada embedding model (rather than sentence transformers). Performance wise they should be similar (in ability to encode meaning accurately) but the ada model can encode a much larger chunk of text. I don't know exact numbers but something like 1-2 pages of text in a typical corporate PDF. Whereas sentence transformers are typically limited to around a paragraph of text.

      Typically you'd split the text into paragraph-sized chunks to handle this requirement of sentence transformers; with GPT-3 embeddings you naturally have more flexibility there.
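
      A naive version of that splitting looks like this (the window size is arbitrary):

          def chunk(text, max_words=150):
              """Split a long transcript into roughly paragraph-sized pieces for embedding."""
              words = text.split()
              return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]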

    • DeWilde 3 years ago

      Thank you :)

solardev 3 years ago

Sorry for the ignorant question, but who is Huberman Lab and why should we care? What drove you to make an AI interface for it?

niemal_dev 3 years ago

This is so well done, thank you for your contribution. An open-source release of the whole approach would be greatly appreciated as well!

Ozzie_osman 3 years ago

Big fan of Huberman Labs. Excited to try this out!

layer8 3 years ago

Just a heads up that the styling and the JS don't work on Firefox.

  • rileytOP 3 years ago

    Old version? Works on latest for me, but the CSS uses @layer which doesn't have great support with older browsers.

abrichr 3 years ago

Congratulations on launching! Can you please share your OpenAI API costs?

  • rileytOP 3 years ago

    The embeddings cost less than $2 for all 100+ episodes. The cost for the answering calls to davinci are around $30 so far.

yewenjie 3 years ago

Can one generate the answers using text-embedding-ada-002 as well?

  • jamesbriggs 3 years ago

    You can return the chunks of text containing the answers, but not generate answers, as that isn't what text-embedding-ada-002 is for. For that you need a generation model (davinci in this case).
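
    In other words (illustrative calls, not actual values):

        import openai

        # Embedding model: turns text into a vector for search; it doesn't write anything
        vec = openai.Embedding.create(
            model="text-embedding-ada-002", input="some transcript chunk"
        )["data"][0]["embedding"]

        # Generation model: takes the question plus retrieved chunks and writes the answer
        answer = openai.Completion.create(
            model="text-davinci-003", prompt="Context: ...\n\nQuestion: ...\nAnswer:", max_tokens=200
        )["choices"][0]["text"]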

krashidov 3 years ago

This is a really amazing application of GPT. Did you fine tune a gpt3 model? If so, how did you implement its ability to say “I don’t know?”

  • rileytOP 3 years ago

    It's not fine-tuned. You literally just add something like "if the answer to the question isn't in the context, say 'I don't know'". It's wild.

    • krashidov 3 years ago

      So do you have the entire Huberman podcast transcript in the context of the prompt?

    • oidar 3 years ago

      How did you do that?

      • charcircuit 3 years ago

        He just told you? The prompt is a combination of the top 5 search results, the phrase to say it doesn't know if it does not have context, and the question the user is actually asking. That is sent to OpenAI and the response is shown along with the search results as the references.
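
        i.e. something along these lines (the exact wording is a guess):

            def build_prompt(snippets, question):
                # snippets: the top 5 transcript chunks returned by the semantic search
                context = "\n\n".join(snippets)
                return (
                    "Answer the question using only the context below. "
                    'If the answer is not in the context, say "I don\'t know."\n\n'
                    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
                )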

bilsbie 3 years ago

What do you mean by embedding?
