Steganography: the sleeping giant


This is a post about hiding things in other things, and why it may lead lawmakers to your devices.

First, an announcement: Tejas Narechania and I submitted a response to the FTC’s call about cloud providers’ business practices. We discuss competition at the internet’s core, focusing on content distribution networks. If you’re interested in competition and consumer protection, check it out on SSRN.

[Image: a blue arm and hand over white, pixelated dots]
Take a spectrogram of the Nine Inch Nails song “My Violent Heart,” and you’ll find a hidden image.

Imagine you’re at a protest. Someone airdrops a meme to you. Thousands of people might have gotten this same meme. But this one has a special message for you. When you get home, you use a tool like steghide to recover a secret message hidden inside the image: “Meet at Oscar Grant Plaza at 12.”
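What does that recovery step actually look like? Below is a minimal sketch of the simplest embedding scheme, least-significant-bit (LSB) steganography, in Python with Pillow. To be clear, this is not steghide’s actual algorithm (steghide uses a passphrase-keyed, much less detectable embedding), and the filenames are placeholders:

```python
# Toy LSB steganography: hide a message in the lowest bit of each pixel's
# red channel. A minimal sketch only; real tools like steghide embed far
# less detectably.
from PIL import Image

def embed(cover_path: str, stego_path: str, message: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    # Message bytes as a bitstring, with a NUL byte marking the end.
    bits = "".join(f"{byte:08b}" for byte in (message + "\0").encode())
    pixels = list(img.getdata())
    assert len(bits) <= len(pixels), "message too long for this cover image"
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the red channel's LSB
        out.append((r, g, b))
    img.putdata(out)
    img.save(stego_path, "PNG")  # lossless format, so the LSBs survive

def extract(stego_path: str) -> str:
    img = Image.open(stego_path).convert("RGB")
    bits = "".join(str(r & 1) for r, _, _ in img.getdata())
    message = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = int(bits[i:i + 8], 2)
        if byte == 0:  # hit the NUL terminator
            break
        message.append(byte)
    return message.decode()

embed("meme.png", "meme_stego.png", "Meet at Oscar Grant Plaza at 12")
print(extract("meme_stego.png"))  # -> Meet at Oscar Grant Plaza at 12
```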

This is the premise of steganography, the art of hiding messages in other messages. It’s an old technique (Herodotus mentions a few examples in the Histories). Where cryptography hides the content of communication, steganography hides the fact that secret communication is occurring.

But steganography has long been plagued by a fundamental problem: it’s easy enough to detect. Back to the airdrop example. Say you’re a spy. You get the meme, along with several hundred others. Can you analyze the memes you got to figure out which one, if any, has a hidden message?

Across the board, and throughout history, the answer has been yes. Steganography is broadly susceptible to steganalysis: an automated process can generally reveal whether any piece of data in a set contains hidden messages.
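How? One classic example is the chi-square attack of Westfeld and Pfitzmann: naive LSB embedding pushes the counts of adjacent pixel-value pairs (2k and 2k+1) toward equality, and that equalization is statistically measurable. Here’s a bare-bones sketch in Python (with Pillow and SciPy), matched to the red-channel toy embedder above; real steganalysis tools combine many statistics like this one:

```python
# Bare-bones chi-square steganalysis (after Westfeld & Pfitzmann, 1999).
# Overwriting LSBs with message bits equalizes the counts of each pair of
# values (2k, 2k+1); clean natural images are rarely so balanced.
from collections import Counter
from PIL import Image
from scipy.stats import chi2

def embedding_probability(path: str) -> float:
    """Rough probability that the red channel's LSBs carry a dense message.
    Near 1.0 is suspicious; near 0.0 looks clean. Caveat: short or sparse
    messages (like the toy example above) are much harder to catch."""
    img = Image.open(path).convert("RGB")
    counts = Counter(r for r, _, _ in img.getdata())
    stat, df = 0.0, 0
    for k in range(128):
        even, odd = counts.get(2 * k, 0), counts.get(2 * k + 1, 0)
        expected = (even + odd) / 2  # what fully equalized LSBs would give
        if expected > 0:
            stat += (even - expected) ** 2 / expected
            df += 1
    return 1.0 - chi2.cdf(stat, df=df - 1)

print(embedding_probability("meme_stego.png"))
```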

Until now. A recent (and underappreciated) paper by de Witt et al. describes a mechanism for perfectly secure steganography, a technique that resists steganalysis. If the protestors had used this technique, the spy in the example above would have no way to discern whether any meme they collected contained a hidden message.
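The core trick, stripped way down: good encryption produces ciphertext bits that are uniformly random, and uniformly random bits can serve as the randomness for sampling from a generative model. The resulting stegotext is then a genuine sample from the model’s distribution, leaving nothing for steganalysis to measure. What follows is a toy interval-coding sketch of that idea with a made-up four-word “model”; de Witt et al.’s actual construction uses minimum entropy coupling to pack more bits into each sample:

```python
# Toy sketch of steganography via sampling. Uniformly random ciphertext
# bits drive word sampling from a model, so the output is an honest sample
# from the model's distribution. This is a simplified interval-coding
# construction, not de Witt et al.'s minimum entropy coupling; MODEL is a
# hypothetical stand-in for a real generative model.
import math
from fractions import Fraction

MODEL = {"the": Fraction(1, 2), "a": Fraction(1, 4),
         "meme": Fraction(1, 8), "protest": Fraction(1, 8)}

def encode(cipher_bits: str, n_words: int) -> list[str]:
    """Sender: turn ciphertext bits into words sampled from MODEL."""
    # Read the bits as a point in [0, 1); uniform bits = uniform point.
    point = Fraction(int(cipher_bits, 2), 2 ** len(cipher_bits))
    words = []
    for _ in range(n_words):
        low = Fraction(0)
        for word, p in MODEL.items():
            if low <= point < low + p:
                words.append(word)
                point = (point - low) / p  # rescale leftover randomness
                break
            low += p
    return words

def decode(words: list[str], n_bits: int) -> str:
    """Receiver: replay the sampling to pin the point (= the bits) down."""
    low, width = Fraction(0), Fraction(1)
    for word in words:
        cum = Fraction(0)
        for w, p in MODEL.items():
            if w == word:
                low, width = low + cum * width, width * p
                break
            cum += p
    assert width <= Fraction(1, 2 ** n_bits), "need more words to recover bits"
    # The point is the unique multiple of 2**-n_bits in [low, low + width).
    return format(math.ceil(low * 2 ** n_bits), f"0{n_bits}b")

bits = "1011001110001111"  # pretend: output of encrypting the real message
stego = encode(bits, 12)
print(" ".join(stego))  # reads like any other sample from MODEL
assert decode(stego, len(bits)) == bits
```

Because the ciphertext bits are uniform, every word sequence comes out with exactly the probability the model would assign it anyway; a spy comparing stegotext to ordinary model output is comparing two draws from the same distribution.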

The magic of modern encryption is that it allows private communication over insecure channels. If you’re messaging friends on Signal, eavesdroppers can collect all the encrypted messages they want; they’ll have no way to know the content of your messages. But it will be apparent that encrypted communications are occurring. Even if an eavesdropper can’t decode the messages, they can block them. Or identify, harass, or imprison the people sending them.
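Why so apparent? Good ciphertext is statistically indistinguishable from random bytes, and random bytes look nothing like ordinary traffic. A hypothetical eavesdropper’s first-pass filter can be as crude as an entropy measurement (a sketch, standard library only):

```python
# Why encrypted traffic stands out: ciphertext has near-maximal entropy.
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 is the maximum."""
    counts = Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data))
                for c in counts.values())

plaintext = b"Meet at Oscar Grant Plaza at 12 " * 100
ciphertext = os.urandom(len(plaintext))  # stand-in for real ciphertext

print(byte_entropy(plaintext))   # English-ish text: around 4 bits/byte
print(byte_entropy(ciphertext))  # random/encrypted: close to 8 bits/byte
```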

But barriers to encrypted communications are not the sole domain of illiberal societies. If passed as written, the UK’s Online Safety Bill would effectively disallow end-to-end encryption within the country.1

Perfectly secure steganography pokes a serious hole in those plans. If perfectly secure steganography like de Witt et al.’s method is possible, scalable, and robust in realistic contexts of use, then it becomes impossible to reliably detect the use of encryption at all: users can simply keep sharing innocuous-seeming messages that happen to contain encrypted ones.

Perfectly secure steganography makes a whole class of automated censorship quite difficult. The Chinese model of deep packet inspection is capable of censoring images in real time (for example, putting a bar over a hat that says “1989”). But it would be incapable of censoring an image with a steganographic message inside of it.

Where does this leave policymakers looking to control information flows?

Should there be limits on the use of end-to-end encryption? I’ll let others opine. Instead, I’ll ask: for those policymakers who are looking to limit end-to-end encryption, and do so effectively, what are they most likely to do?

Encryption backdoors make sense to such policymakers because there are only a few intermediaries you need to pressure (think: the two App Stores), and being able to decrypt anything is convenient during investigations. What happens when encryption backdoors no longer help, because any unencrypted data you intercept could itself contain hidden messages, and you’d have no way of knowing it did?

In that world, regulators looking to watch content or chill speech would, I think, look to control endpoints instead. With endpoint control, even a decrypted message can be viewed by the regulator, as users would use their device to read the message. With endpoints, again, there are a limited number of intermediaries you’d need to pressure (think: phone manufacturers who need a communications license in your jurisdiction).2

“State control over endpoints” is a theme these days. A drama with many characters. Concerns about “friendshoring” supply chains, anxieties about who produces which chips—both acknowledge that states want a more active role in guiding what goes on in their slice of the internet. And, if encryption is provably good enough that controlling the wire doesn’t give you all the properties you want, regulators will come for the endpoint, eventually. If you look closely, they already are.

[Image: a puzzle]
This image is a puzzle. Can you solve it?
