Decoding LLM Uncertainties for Better Predictability

uncertainty.demos.watchful.io

16 points by shayanjm 2 years ago · 2 comments

armcat 2 years ago

Great work! I love the use of simple techniques like normalized entropy and cosine distance to reveal what the model is "thinking". The example with random number generation is very cool! I actually managed to get that example to work by telling the model it's allowed to sample with replacement, AND by giving it an example of repeated numbers (just telling it that it can repeat numbers won't work). Then I get a perfectly uniform distribution at each step (the spikes are all the same length). I can definitely see how something like this could be used to guide prompt-engineering strategies.
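
For anyone who wants to poke at the entropy side without the demo, here's a minimal sketch of per-position normalized entropy, assuming you can get top-k logprobs for each generated token (the function name and toy numbers are mine, not the demo's):

    import math

    def normalized_entropy(positions):
        """Entropy of each position's top-k token distribution,
        divided by log(k) so every score lands in [0, 1]."""
        scores = []
        for logprobs in positions:  # one {token: logprob} dict per position
            probs = [math.exp(lp) for lp in logprobs.values()]
            total = sum(probs)      # renormalize the truncated top-k mass
            probs = [p / total for p in probs]
            h = -sum(p * math.log(p) for p in probs if p > 0)
            scores.append(h / math.log(len(probs)))
        return scores

    # A confident position vs. one where the mass is spread evenly.
    print(normalized_entropy([
        {"7": -0.05, "3": -3.2, "9": -4.0},  # ~0.24: model is fairly sure
        {"1": -1.1, "4": -1.2, "8": -1.3},   # ~1.00: anyone's guess
    ]))

A flat run of high scores is exactly what the "spikes all the same length" observation above looks like in numbers.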

shayanjm (OP) 2 years ago

Building on our last research post, we wanted to find ways to quantify "ambiguity" and "uncertainty" in LLM prompts and responses. We ended up discovering two useful forms of uncertainty: "Structural" and "Conceptual" uncertainty.

In a nutshell: Conceptual uncertainty is when the model isn't sure what to say, and Structural uncertainty is when the model isn't sure how to say it.
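
To make that distinction concrete, here's one hypothetical way to operationalize it (my sketch, not necessarily how the demo computes it): sample the same prompt several times, embed each completion, and look at how far apart the samples land. High token-level entropy with low embedding spread suggests structural uncertainty (many phrasings, one idea); high embedding spread suggests conceptual uncertainty (the samples disagree on substance).

    import numpy as np

    def mean_pairwise_cosine_distance(embeddings):
        """Average cosine distance over all pairs of completion embeddings.
        embeddings: (n_samples, dim) array, one row per sampled completion."""
        unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        sims = unit @ unit.T
        i, j = np.triu_indices(len(unit), k=1)  # upper triangle, no diagonal
        return float(np.mean(1.0 - sims[i, j]))

    rng = np.random.default_rng(0)
    same_idea = np.ones((5, 8)) + rng.normal(0, 0.01, (5, 8))  # paraphrases
    different_ideas = rng.normal(0, 1.0, (5, 8))               # disagreement
    print(mean_pairwise_cosine_distance(same_idea))        # ~0.0 -> structural
    print(mean_pairwise_cosine_distance(different_ideas))  # large -> conceptual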

You can play around with this yourself in the demo!
