Apple puts "do not hallucinate" into prompts, it works

twitter.com

4 points by djhope99 a year ago · 5 comments

ggm a year ago

I'd like a rational explanation of how the LLM interprets "don't hallucinate". Is it perhaps "translated" internally into the functional equivalent of a higher confidence check on the output?

Otherwise, I think it's baloney. I know there is not a simple linear mapping from plain English to the ML, but the typed word clearly is capable of being parsed and processed; it's the "somehow" I'd like to understand better. What would this instruction do to the interpretation of paths through the weights?

Pretty much 'citation needed'

TillE a year ago

Everything about prompt engineering is just the voodoo chicken.

https://wiki.c2.com/?VoodooChickenCoding

  • qbxk a year ago

    What's interesting about the anecdotes at that link, though, is that once the confusion settles, there is an explanation. The chicken may have made no sense, but the problem did get solved, and the chicken was necessary, just not for the reasons you thought.

    Maybe prompt engineering will make sense someday, or maybe we need artificial general psychology first?

unlisted7347 a year ago

Interestingly, negative prompts for Stable Diffusion (like "deformed hands") have a similar effect. How does an LLM decide what counts as a hallucination? Mayhaps it double-checks itself? But probably it just became self-aware.
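
For reference, a minimal sketch of what such a negative prompt looks like in practice, using the Hugging Face diffusers library; the checkpoint name, prompt strings, and step count are illustrative assumptions, not anything from the thread.

    # Illustrative sketch: passing a negative prompt to a Stable Diffusion
    # pipeline. Checkpoint and prompt strings are assumptions for the example.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        prompt="portrait photo of a person waving",
        negative_prompt="deformed hands, extra fingers",  # concepts to steer away from
        num_inference_steps=30,
    ).images[0]
    image.save("portrait.png")

Under the hood there is no self-check: the pipeline encodes the negative prompt and uses it as the "unconditional" branch of classifier-free guidance, so each denoising step is pushed away from those concepts rather than the model judging its own output.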

sva_ a year ago

X doubt
