One-Time Pad reinvented to make electronic copying impossible
technologyreview.com

Well, in practice, public/private key encryption with large keys achieves the same security, as does meeting in person (as required here) and simply exchanging a 100 GB flash drive containing a random OTP.
What this method does is make it harder to attack if Bob's key/OTP is physically stolen. Although keys embedded on secure chip cards, cheaply and commonly available now, are just as secure: the authors apparently claim that the keys can be read off the glass blocks in less than 24 hours, whereas extracting a long key from a chip card (even one that is merely PIN-protected, with permanent blocking after x mistakes) would be a bigger pain, since you have to physically scan the chip with extremely expensive gear.
What this method doesn't do is "make electronic copying impossible" or facilitate DRM, since Bob, or anyone who compromises Bob's computers, can freely copy and distribute the data right after decryption.
This is not a one-time pad. It uses a key that contains less information than the message. It is still possible to brute-force the key if you have enough data.
It is a great way to make the key hard to copy, which is how encryption is usually circumvented in practice.
I'm not sure you're right on this. The way I read it, they generate the keys by sending the same random signals through each slab of glass. If they send N signals through the glass and N is >= the message length, then it certainly is a one-time pad. The two keys end up as long as (or likely longer than) the message, so you can't brute-force it.
You're assuming that the structure of the glass can't be 'compressed' using assumptions about how such glass is structured. At the very least, the key space could be reduced well below the naïve entropy estimate.
You misread it - they are not using any short keys.
I feel like I'm missing something. Isn't this still vulnerable to "electronic" intrusion, since the light-slab-cipher has to presumably be read into some type of memory somewhere, somehow?
Yes, they assume that the sending/receiving machines aren't themselves compromised. That case, however, doesn't have a solution as of yet (that I know of). The point is that any machine intercepting the transmission of the message has no way of possibly decoding it, period. It absolutely requires the key to be decoded.
OT: If two people have to send a signal to each other, they should be named Alice and Bob. This goes back to Einstein's paper on the EPR paradox [1], and is presumably based on the fact that Alice starts with A and Bob starts with B.
Sometimes I read articles where Alice sends a signal to Charlie or something like that, and for me that is a huge red flag indicating the author may not have paid much attention in school and/or may not have a great grasp of what they are talking about.
Fortunately this article is about Alice and Bob, so I am inclined to take it seriously.
If only our scientific journals kept such rigorous academic standards.
And then there's Eve, the evil one, the eavesdropper, the susceptible one in Genesis.
Can someone explain this to me? I read the article but it didn't make sense.
The physical object as I understand it provides a source of random data, with the property that it is fast to do a single lookup (shine an arbitrary light pattern on it), but slow to copy the whole data.
This means that an attacker who steals the object, but doesn't know which lookups will be done, won't be able to copy all the data.
Now my objection is that the only lookups that the person who steals the object will not be able to do are the ones based on random patterns that have not been published, i.e. the ones that are generated the next time Alice and Bob meet.
But from that point of view, it would be no less safe if Alice and Bob simply generated a one time pad and stored it when they met.
The original paper [1] is much clearer about the actual algorithm. The below is pretty much pulled straight from it, although I simplify it slightly (by assuming one pattern, etc.)
First, Alice and Bob decide on a pattern P. Then they each compute a key K(A) and K(B) using the pattern P by shining P through their slab. Then, they publicly publish P and K(A) ⊕ K(B). (Apologies for the lack of good mathematical notation, but HN is not a great medium for such.)
Because P, K(A), and K(B) are all random, the attacker learns no useful information from these two published items.
Now, for Alice to encrypt a message m, she computes K(A) ⊕ m and sends it to Bob. Once Bob gets it, he uses the public pattern P to recreate K(B). Then he uses the publicly published K(A) ⊕ K(B) to compute:
K(B) ⊕ [K(A) ⊕ K(B)] ⊕ (K(A) ⊕ m)
which is in fact m. At any time, Eve can only know P (useless since she doesn't have either slab) and K(A) ⊕ K(B), which is not enough to recover the key or the message.
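The XOR algebra above can be checked with a minimal Python sketch. Here `os.urandom` stands in for shining the pattern P through each slab; in the real scheme K(A) and K(B) come from the physical glass, not a software RNG:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-ins for the slab responses to the shared pattern P.
key_a = os.urandom(16)  # K(A): Alice's slab response
key_b = os.urandom(16)  # K(B): Bob's slab response

# Published openly: P (implicit here) and K(A) XOR K(B).
public_xor = xor(key_a, key_b)

# Alice encrypts: she sends K(A) XOR m.
m = b"attack at dawn!!"
ciphertext = xor(key_a, m)

# Bob decrypts: K(B) XOR [K(A) XOR K(B)] XOR (K(A) XOR m) = m,
# because every key term appears exactly twice and cancels.
recovered = xor(key_b, xor(public_xor, ciphertext))
assert recovered == m
```

Neither published value alone (nor both together) determines K(A) or K(B), which is why Eve learns nothing from the public dictionary without a slab.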
Thanks, that's what I thought but I'm glad you confirmed it and put it in their notation.
So here is my issue:
Eve knows P, so she can steal the object and hence compute K(B).
The difficulty in "copying" the object is only significant when Eve does not know P.
But that same difficulty would apply if K(A) and K(B) were random numbers that Alice and Bob generated when they met.
Or is the point that Alice and Bob generate K(A) ⊕ K(B) for a lot of values of P, and then randomly select which ones to use when sending a message? That doesn't seem to be what the authors intended, although it would then be secure against the object being stolen for a short amount of time.
Interesting point. I'm not sure what prevents Eve from simply computing K(B) from P if she has stolen Bob's slab. Maybe I have misinterpreted something?
The actual paper does describe generating n different patterns and then randomly selecting one of them to encrypt the message, but the index of the one that is randomly selected is sent with the encrypted message so that Bob can use it to look up the appropriate pattern. I took this as just generating more than one key for convenience's sake.
Notably, a requirement for security of the slab is that given temporary access, Eve "must not be able to efficiently copy or model its contents." So, I think the point is that since there are many different P's (and hence K(A)'s and K(B)'s), Eve cannot recreate K(B) for all P's in a reasonable amount of time. Further, she can't actually make a physical copy of the device. Still, it seems that if Eve can steal the device, she can break old messages --- which I guess is, as you said, a property shared by regular OTPs. When she steals the device, though, she can only decrypt so many messages before detection, since there is apparently a key recovery rate of 1.5 seconds per key.
But one of the other requirements set forth in the paper is that if the slab is stolen, Eve must not be able to send or receive messages. I'm not sure how that is fulfilled here.
>But one of the other requirements set forth in the paper is that if the slab is stolen, Eve must not be able to send or receive messages. I'm not sure how that is fulfilled here.
If a "full" message involved both sides randomly picking a P, then this could still be satisfied.
But it doesn't seem to live up to my hopes for security: all it really guarantees is that the probability of Eve decrypting an intercepted message (assuming she steals the slab for time t and knows all the P's) is t/T, where T is the time Bob and Alice spend generating K(A) ⊕ K(B) for different P's.
When I woke up this morning (and was in a better state of mind), I realized the supplements may have been on the arXiv page but not included in the paper. I was correct. Here [1] is the supplement paper, which I find more useful than the actual paper itself.
It appears I was correct about the stolen CPUF leading to decryption of previous messages; in supplement G, at the bottom of (2), the authors state:
"Finally, it is worth noting that with a stolen device and access to the public dictionary, an attacker Eve may be able to quickly decrypt any of Alice and Bob’s previous communication that she may have saved (since Alice and Bob publically share which SLM patterns they use each round). For this reason, it is highly beneficial for Alice and Bob to utilize a second layer of encryption to ensure that any eavesdropper cannot determine these previously shared patterns, as discussed next."
They also discuss other security properties of the scheme in supplements G and H, which are both excellent.
[1]: http://arxiv.org/src/1305.3886v1/anc/CPUF_Supplementary_Mate...
I understood that the lookups are standard, i.e. anyone can do them. The problem is that shining light through glass creates heat which can change the structure of the glass if you do it too often/quickly, hence the time limiting factor. Naturally, this would constrain the intended recipient as well. It is the outcome of these lookups that form the secret key.
What's involved in the handling of this glass? They claim security because heating the glass damages its structure. A fingerprint or scratch would also damage it. Is it secure because it's just so fragile that it is plain unworkable in real world conditions?
Also, this article doesn't touch on whether the randomness generated is sufficiently random for cryptographic purposes. With enough error correction and an improper random distribution, it can become possible to break even an OTP.
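A toy sketch of that last point: if the pad's bits are biased rather than uniform, the ciphertext statistically mirrors the plaintext. The 0.9 bias and bit counts below are illustrative assumptions, not values from the paper:

```python
import random

random.seed(0)  # deterministic demo

def biased_pad(n_bits, p_zero=0.9):
    # A flawed pad: each bit is 0 with probability 0.9 instead of 0.5.
    return [0 if random.random() < p_zero else 1 for _ in range(n_bits)]

message = [random.randint(0, 1) for _ in range(100_000)]
pad = biased_pad(len(message))
cipher = [m ^ k for m, k in zip(message, pad)]

# Attacker's naive guess: the ciphertext bit equals the plaintext bit.
# Correct whenever the pad bit is 0, i.e. ~90% of the time here,
# versus exactly 50% for a uniformly random pad.
matches = sum(c == m for c, m in zip(cipher, message))
print(matches / len(message))  # roughly 0.9
```

With a true OTP the guess succeeds on exactly half the bits and the ciphertext carries zero information; any bias away from 0.5 leaks plaintext statistics.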
I'm not sure about the handling of the glass. The paper [1] gives all of the specific materials used in the experiment, but it doesn't say anything about handling them and I don't know enough about the area to be able to intelligently comment...
The paper also mentions that the randomness they are using passes the Diehard and NIST randomness tests, so it appears they are pretty random. Apparently, though, they have to sacrifice some of the bits in order to ensure that! It references "Supplements" E and F, saying they include more details, but they're not included in the arXiv paper. Of course, the original random patterns used to generate the keys are randomly selected too, so let's hope they're using a good random source for that...
As far as I can tell, it's just a reasonably large on-demand (until it overheats) source of randomness. All the other OTP principles hold, the glass-generated random bits are the only new things. And you can still "electronically copy" it - viruses are great at executing code, such as dumping all the random data such a thing can generate (they even say it would only take 24 hours, and less to upload).
I suspect it's not even very good randomness. Long dark / bright lines from deeper scratches at either end, skewed randomness due to the atomic structure of glass (though glass is probably a reasonably good natural source, since it's not crystalline), that sort of thing happens when you're dealing with physical (hard, static) structures. And this thing has to be very reproducible or you can't decrypt your message, so we're not talking extracting randomness from a warm cup of tea.
There's been a lot of "physically unclonable function" work, which IMO is a lot more useful for DRM and anti-counterfeiting than for encryption.
The big weakness with OTPs is inadvertent key reuse, not someone stealing the key wholesale.
Even with quantum computing, conventional OTPs are fine, provided keys aren't reused (or lost). OTOH, there are plenty of other, more efficient symmetric and asymmetric systems that are still fine under quantum computing, too. (It's really not much of an issue for conventional symmetric cryptography; yes, most common public-key systems in use today fall to quantum computing, but there are known, viable systems that would be fine.)
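The key-reuse weakness is easy to demonstrate: XORing two ciphertexts encrypted under the same pad cancels the key entirely, leaving m1 ⊕ m2. A toy sketch (the messages and reused key are hypothetical):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = bytes(range(32))  # the reused pad; the attacker never sees this

m1 = b"meet me at the old bridge at 9pm"
m2 = b"                                "  # 32 spaces

c1 = xor(m1, key)
c2 = xor(m2, key)

# The key cancels out entirely: c1 XOR c2 = m1 XOR m2.
leak = xor(c1, c2)
assert leak == xor(m1, m2)

# Since m2 is all spaces (0x20), XORing each leaked byte with 0x20
# recovers m1 exactly -- without ever touching the key.
exposed = bytes(b ^ 0x20 for b in leak)
assert exposed == m1
```

In practice the second message is rarely all spaces, but crib-dragging known words through m1 ⊕ m2 achieves the same result; this is exactly the "two-time pad" failure mode.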
I don't understand, where is the beef?
How is this qualitatively different from plain old symmetric encryption? Bob and Alice (or whatever they renamed the characters in their example, for whatever reason) each have the key; Alice encrypts ( ⊕ ) and sends to Bob, and he decrypts.
Is it the idea that the key cannot be digitally copied and is instead two pieces of glass? How is that different from Alice and Bob using a one-time key for each transmission and then destroying it (with a secure wipe)? What if they use handwritten papers to keep their keys and burn the pages when done?
I am probably being dense missing some deeper idea. Anyone care to explain?
The point is physical security and information density.
If you use an electronic one-time key and then secure wipe, you're not protected from someone who unbeknownst to you already read the contents of your hard drive.
If you use handwritten papers, then you cannot store large amounts of data, nor can you achieve a feasible data transmission rate.
This allows an uncopyable object the size of a grain of sand to hold about a terabit of private shared data, and to process that at the speed that we can send/receive/react to light. (Considering that the basic technology is also used to send signals down fiber-optics cables, speed should be pretty good.)
>That looks to be a significant improvement over any kind of cryptography that stores keys electronically and is therefore vulnerable to an electronic attack that can copy digital information perfectly.
Hmmm... so is paper?
Well, you can't send paper privately in milliseconds across the country. But yes, the meeting to create the key limits the possible receivers, as well as being an inconvenient hassle.
Can someone help me understand something - Manufacturing the "glass" in order to create at least 2 identical copies will require some blueprint. I would assume that this blueprint would be stored in some database or communicated to various parties in some fashion. Couldn't hackers just get their hands on this somehow in order to reproduce the glass without having to actually physically steal the glass?
The two slabs are actually different. See my explanation of the algorithm downthread: https://news.ycombinator.com/item?id=5741793
Ahh ok - great explanation btw.
I'm still curious how this glass is produced. It seems to me that you would be introducing many more potential intrusion points by using physical encryptors since many more parties (people, databases, and systems) will be aware of the details in order to produce the appropriate slab. Having said that, the same thing would apply for any other physical encryptor that would have to be manufactured.
I'm also interested in how they introduce quality entropy into the glass. It's well researched algorithmically, but are materials researchers up to it? Even subtle patterns could bring the key space within brute-force distance.
When I read articles like this, I imagine scenarios involving some United Earth Government sending vessels deep into space with one-time pads, which would protect us from eavesdropping by advanced hostile civilizations with vast computing/cracking power. I guess it would kind of suck if your glass got broken and then you couldn't decode any more messages from Earth.
This is a very interesting way of solving the key distribution problem. It forces attackers to capture both the digital part of the key and the physical.
This doesn't solve the key distribution problem at all:
To start off, both Alice and Bob must have their own slabs of diffusing glass and must physically meet to create a key for encoding a message later.
Like any other OTP system, key distribution — ensuring that the parties who wish to communicate securely have a shared secret (or secrets in the case of OTP; the whole point of OTP is that a key is only used once) — is the weak point. Unlike many other OTP systems, this one requires a physical meeting, which, depending on your use-case for needing crypto in the first place, may be impossible, or deadly dangerous.
At best, this complicates key compromise.
EDIT: clarifying language.
Correct, I can't see the practical use of this.
If Alice and Bob must have a secure method to jointly compute K(A) ⊕ K(B), then why wouldn't they use that same method to just exchange the data?
At the time of their key-exchange meeting, they don't know what messages they will want to exchange later.