LLMs can hide text in other text of the same length

arxiv.org

2 points by goplayoutside 2 months ago · 1 comment


goplayoutside (OP) · 2 months ago

https://x.com/rohanpaul_ai/status/1982222641345057263

>The paper shows how an LLM can hide a full message inside another text of equal length.

>It runs in seconds on a laptop with 8B open models.

>First, pass the secret through an LLM and record, for each token, the rank of the actual next token.

>Then prompt the model to write on a chosen topic, and force it to pick tokens at those ranks.

>The result reads normally on that topic and has the same token count as the secret.

>With the same model and prompt, anyone can reverse the steps and recover the exact original.

>These covers look natural to people, but models usually rate them less likely than the originals.

>Quality is best when the model predicts the hidden text well, and worse for unusual domains or weaker models.

>Security comes from the secret prompt and the exact model, and it gives the sender believable deniability.

>One risk is hiding harmful answers inside safe replies for later extraction by a local model.
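
Below is a minimal sketch of the encode/decode loop the quoted summary describes, assuming a Hugging Face causal LM. The model name, prompts, and helper functions are placeholders I chose for illustration, not anything taken from the paper; treating "pick tokens at those ranks" as a full-vocabulary argsort lookup is one plausible reading of the method.

```python
# Rank-based LLM steganography sketch (illustrative, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # placeholder: any open ~8B causal LM

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


@torch.no_grad()
def token_ranks(prefix_ids: torch.Tensor, target_ids: torch.Tensor) -> list[int]:
    """For each target token, record its rank in the model's next-token
    distribution given the prefix plus all preceding target tokens."""
    ranks, ids = [], prefix_ids.clone()
    for t in target_ids.tolist():
        logits = model(ids.unsqueeze(0)).logits[0, -1]
        order = torch.argsort(logits, descending=True)
        ranks.append((order == t).nonzero().item())
        ids = torch.cat([ids, torch.tensor([t])])
    return ranks


@torch.no_grad()
def tokens_from_ranks(prefix_ids: torch.Tensor, ranks: list[int]) -> torch.Tensor:
    """Generate one token per rank: at each step, take the token sitting at
    that rank of the model's next-token distribution."""
    ids, out = prefix_ids.clone(), []
    for r in ranks:
        logits = model(ids.unsqueeze(0)).logits[0, -1]
        order = torch.argsort(logits, descending=True)
        nxt = order[r]
        out.append(nxt.item())
        ids = torch.cat([ids, nxt.unsqueeze(0)])
    return torch.tensor(out)


def encode(secret: str, secret_prompt: str, cover_prompt: str) -> str:
    secret_ids = tok(secret, add_special_tokens=False, return_tensors="pt").input_ids[0]
    sp_ids = tok(secret_prompt, return_tensors="pt").input_ids[0]
    cp_ids = tok(cover_prompt, return_tensors="pt").input_ids[0]
    ranks = token_ranks(sp_ids, secret_ids)       # step 1: secret -> rank sequence
    cover_ids = tokens_from_ranks(cp_ids, ranks)  # step 2: ranks -> on-topic cover
    return tok.decode(cover_ids)


def decode(cover: str, secret_prompt: str, cover_prompt: str) -> str:
    cover_ids = tok(cover, add_special_tokens=False, return_tensors="pt").input_ids[0]
    sp_ids = tok(secret_prompt, return_tensors="pt").input_ids[0]
    cp_ids = tok(cover_prompt, return_tensors="pt").input_ids[0]
    ranks = token_ranks(cp_ids, cover_ids)        # cover -> recovered rank sequence
    secret_ids = tokens_from_ranks(sp_ids, ranks) # ranks -> original secret
    return tok.decode(secret_ids)


# Hypothetical usage: both prompts are the shared secret between sender and receiver.
# cover = encode("meet at dawn", secret_prompt="...", cover_prompt="Write about gardening:")
# print(decode(cover, secret_prompt="...", cover_prompt="Write about gardening:"))
```

Note the assumptions baked in: both parties need the exact same model, tokenizer, and prompts; forward passes must be deterministic (ties or dtype drift would break exact recovery); and re-tokenizing the decoded cover string must reproduce the generated token ids, which a real implementation would have to verify or work around by transmitting token boundaries. The cover also has the same token count as the secret, which matches the "same length" claim in the title.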
