Four RCE Zero-Day Flaws Plague Internet Explorer
securityweek.com"Microsoft was initially given a May 12, 2015 deadline, but this deadline was extended to July 19 at the vendor’s request. Since the company failed to meet this deadline, ZDI has decided to inform users of the existence of this flaw."
I would expect Microsoft to handle security vulnerabilities with a higher priority. Not sure why they are dropping this on the floor.
At some point I prototyped a tool that used Ron Rivest's timelock puzzles (repeated squaring modulo the product of two large safe primes takes a long time and isn't parallelizable, but is quick to compute if you can factor the modulus) to encrypt compressed tarballs of zero-day disclosures.
The idea would be that if you found a vulnerability in a product whose vendor was likely to pour more money into gag orders and legal threats than into fixing the vulnerability, you would publish the vulnerability encrypted in such a way that it would take several years of continuous computation to get the decryption key. Legal threats and/or general foot dragging couldn't put the cat back in the bag.
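At its core it was just Rivest's construction. A minimal Python sketch (with toy primes, a made-up t, and SHA-256 as an ad-hoc key derivation, rather than whatever the real tool used) looks something like this:

```python
# Toy sketch of a Rivest-style timelock puzzle. The primes, t, and the
# SHA-256 key derivation here are illustrative placeholders.
import hashlib
import secrets

def make_puzzle(t, p, q):
    """Creator side: cheap, because knowing p and q lets us reduce the exponent."""
    n = p * q
    phi = (p - 1) * (q - 1)
    a = secrets.randbelow(n - 2) + 2      # random base, almost surely coprime to n
    e = pow(2, t, phi)                    # the shortcut: reduce 2^t modulo phi(n)
    b = pow(a, e, n)                      # equals a^(2^t) mod n
    key = hashlib.sha256(b.to_bytes((n.bit_length() + 7) // 8, "big")).digest()
    return n, a, key                      # publish n, a, t; use key to encrypt the payload

def solve_puzzle(n, a, t):
    """Solver side: t squarings that have to happen one after another."""
    x = a
    for _ in range(t):
        x = pow(x, 2, n)
    return hashlib.sha256(x.to_bytes((n.bit_length() + 7) // 8, "big")).digest()

# Demo with small primes; a real puzzle would use large (e.g. 4096-bit) primes
# and a t tuned to years of squarings on the fastest hardware you expect.
p, q = 1000003, 1000033
t = 100_000
n, a, key = make_puzzle(t, p, q)
assert solve_puzzle(n, a, t) == key
```

The creator's cost is dominated by finding the primes; the solver is stuck doing the t squarings in sequence.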
Sometimes I regret not publishing the tool.
Do you still have access to the source? It sounds like a really interesting tool, even if only partly completed.
Here is an implementation of the same thing, but tailored toward encrypting Bitcoin private keys. Generalizing shouldn't be too difficult if you're interested: https://github.com/petertodd/timelock
Also a good article by Gwern about the same topic: http://www.gwern.net/Self-decrypting%20files
No, the implementation you link to uses parallel-serial hash chains. With that construction, the person creating the file has to do as much work as the person reading the file; it's just that the creation can be parallelized. Using the linked-to code, you'd need to dedicate 1.5 months on a 16-core machine to generate a file that takes 2 years to read.
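To make the difference concrete, the hash-chain approach is roughly this (a simplified sketch, not petertodd's exact construction; the chain count and length are made up):

```python
# Simplified parallel-create / serial-solve hash-chain timelock.
import hashlib
import secrets

def hash_chain(seed: bytes, n: int) -> bytes:
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
    return h

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def create(num_chains: int, chain_len: int):
    """Creator: each chain can run on its own core, but the total hashing
    work equals what the solver will have to do."""
    seeds = [secrets.token_bytes(32) for _ in range(num_chains)]
    ends = [hash_chain(s, chain_len) for s in seeds]              # parallelizable
    # Link the chains: seed i+1 is masked by the end of chain i, so the
    # solver can only start chain i+1 after finishing chain i.
    links = [xor(seeds[i + 1], ends[i]) for i in range(num_chains - 1)]
    return seeds[0], links, ends[-1]                              # ends[-1] encrypts the payload

def solve(first_seed: bytes, links, chain_len: int) -> bytes:
    """Solver: forced to walk the chains one after another."""
    end = hash_chain(first_seed, chain_len)
    for link in links:
        end = hash_chain(xor(link, end), chain_len)
    return end

first_seed, links, key = create(num_chains=4, chain_len=10_000)
assert solve(first_seed, links, 10_000) == key
```

Every hash the creator computes here has to be recomputed by the solver; parallelism only helps the creator, because all the seeds are known up front.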
I'm suggesting something based on quadratic residues modulo a Blum integer instead of repeated hashing. This allows a shortcut if you know how to factor the Blum integer. It takes a few minutes on a single core to create a file that would take 2 years to read. Most of the file creation time is spent checking large random numbers for primality.
There are tradeoffs between the two different ways of creating timelock puzzles, but for the time being, I'm willing to assume anyone who can factor an 8192-bit number in under 2 years has better things to do with their cracker than decrypt my 0-day.
I suppose it would be nice to have a hybrid system where you could have stronger guarantees that someone capable of factoring 8192-bit integers would still need (for instance) at least 6 months of computing hash chains after they finished solving the quadratic residue portion of the problem, while someone who couldn't factor 8192-bit integers would spend (for instance) 18 months computing quadratic residues, followed by those 6 months of hash chaining. This would need 11.25 days of compute time on a 16-core machine to set up the hash chains, but at least it's not months of setup time.
I should point out that I would suggest using the solution to the quadratic residue problem as a key for a Blum Blum Shub stream cipher to encrypt the hash chains and the message, so that portion of the system only relies on square roots and quadratic residues modulo a Blum integer. (Using the quadratic residue solution as a key for AES-GCM or ChaCha20-Poly1305 would open you up to weaknesses in those ciphers, and in this case the slowness of Blum Blum Shub isn't a problem.)
Then, for the inner puzzle, I would use hash chains using a hash in the Blake family to generate a ChaCha20-Poly1305 key to encrypt the plaintext. Since Blake's round function is based on ChaCha20, this also reduces the number of different primitives you're relying on.
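A rough sketch of that inner layer (assuming blake2b as the Blake-family hash and the pyca/cryptography ChaCha20Poly1305 AEAD; the chain length is arbitrary):

```python
# Inner layer sketch: a Blake hash chain derives the ChaCha20-Poly1305 key.
# blake2b, the chain length, and the use of pyca/cryptography are my own
# choices for illustration, not a spec.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def blake_chain(seed: bytes, n: int) -> bytes:
    h = seed
    for _ in range(n):
        h = hashlib.blake2b(h, digest_size=32).digest()
    return h

seed = os.urandom(32)                     # published inside the outer (BBS) layer
key = blake_chain(seed, 1_000_000)        # the solver repeats this chain to get the key
nonce = os.urandom(12)
inner_ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"0-day advisory goes here", None)
```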
In the end, the cipher text would be doubly encrypted with Blum Blum Shub and ChaCha20-Poly1305, with keys being the solutions to repeated quadratic residue modulo a Blum integer and repeated hash chains using a hash from the Blake family. This minimizes the number of possible points of failure.
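The outer layer is then just a Blum Blum Shub keystream, seeded from the quadratic-residue solution, XORed over the inner ciphertext. A toy sketch (tiny Blum primes and one output bit per squaring; a real modulus would be thousands of bits):

```python
# Toy Blum Blum Shub stream cipher for the outer layer. The primes and seed
# are placeholders; the real seed would be the timelock puzzle's solution.
def bbs_keystream(seed: int, p: int, q: int, nbytes: int) -> bytes:
    assert p % 4 == 3 and q % 4 == 3      # Blum primes
    n = p * q
    x = pow(seed, 2, n)                   # standard BBS initial state
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            x = pow(x, 2, n)
            byte = (byte << 1) | (x & 1)  # one least-significant bit per squaring
        out.append(byte)
    return bytes(out)

def xor_bytes(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))

inner_ciphertext = b"...output of the inner ChaCha20-Poly1305 layer..."
outer_ciphertext = xor_bytes(
    inner_ciphertext,
    bbs_keystream(seed=123456789, p=499, q=547, nbytes=len(inner_ciphertext)),
)
```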
I'm sure there's a lot that goes into fixing these, but Adobe surprised many in the security community with their fast and responsible reaction to the zero-day flaws unveiled by the Hacking Team leaks.
Microsoft has the resources to fix these; I'm not sure what their excuse is (and it may be valid), but vulnerabilities like this should take highest priority.
> Microsoft has the resources to fix these;
I agree, and have to assume the time to fix is spent on regression testing and checking with big vendors/users whether the fix inadvertently breaks something they were relying on. At this point, how many Windows bugs are now features set in stone that must be carried on in perpetuity because so much software has been built around the buggy behavior?
Lots, but RCE is never a feature that will be set in stone.
I'm not a 'security researcher', and have only a technical layman's grasp of the issue, but:
> "By manipulating a document's elements an attacker can force a dangling pointer to be reused after it has been freed. An attacker can leverage this vulnerability to execute code under the context of the current process,”
The first and second sentences there feel like an 'and then a miracle happens' argument (http://star.psy.ohio-state.edu/coglab/Miracle.html). I get that, in some cases, dangling pointers might allow a bit of uploaded data to be treated like a bit of internal data. But it seems to me like a piece of extraordinarily unlikely bad luck for that to lead to executing arbitrary code.
So I don't dismiss that there is a theoretical risk, but can anyone suggest how much practical risk these flaws carry? In particular, is the risk of such an exploit greater than the risk of an attacker finding an entirely new weakness? If not, then I can understand why there is no great urgency to patch these flaws.
This is a standard description of a UAF (use-after-free) flaw. Reliability can vary from "30% of the time, it works every time" to perfectly reliable. It just depends on the instance of the bug. It is one of the most common exploitable bug classes found via fuzzing.
Because so many browser fuzzing crashes are UAFs, people have put a lot of effort into developing reliable techniques for exploiting them.
See e.g: http://www.rapid7.com/db/modules/exploit/windows/browser/ms1... for a reasonably reliable example.
Thank you, exactly what I needed to understand.
Google's Project Zero team has written some good blog posts on this topic if you want to read more:
http://googleprojectzero.blogspot.com/2015/06/what-is-good-m...
http://googleprojectzero.blogspot.com/2015/06/dude-wheres-my...
"but can anyone suggest how much risk is in these risks."
Twenty years ago, maybe this argument carried the day. Don't even consider using it today. The tooling, techniques, and skill involved are far beyond what you would guess if you are not in this world.
This is not quite the same thing we are talking about, but let me give you a different example. An obscure cross-site-scripting attack is no big deal, right? Well, courtesy of BeEF [1], if the XSS can be leveraged to get the victim to download a script, which is a low bar, BeEF can then be used to proxy web access through the compromised browser, allowing an attacker to lever up from "small XSS" to "crawling your intranet with the internal credentials of the compromised user".
Yow!
Do not ever count on difficulty of exploit as a defense anymore. In many cases, the reason these people aren't providing off-the-shelf exploits for this sort of thing isn't that it's too difficult to make practical; it's that in the security world it is now too trivial to be worth spelling out. Attacker capabilities (and pen-testing capabilities) have skyrocketed in the past ten years, but the defense side, for the most part, still operates like it's 1995 and the idea that a program might be used on a network is some sort of major revelation.
(I'm on the defense side personally. It feels about like this: https://youtu.be/MPt7Kbj2YNM?t=2m11s In theory, I am powerful, in theory I control the field, in theory all the advantages should be mine, but....)
I wasn't considering anything, let alone 'making an argument'. Anyone who listens to non-specialists like me to determine security strategy is asking for trouble.
I also wasn't making any comment about 'banking on difficulty of exploit'; what I was asking for was relative risk. I think that all code is exploitable. The question I had was whether the exploitation of a particular UAF bug is sufficiently easy that it outweighs the base risk of a new exploit being found. If I have finite resources, understanding where to apply them to reduce risk is important.
The other responses have answered my question in some detail.
I'm sorry, my tone was not intended as "what are you even talking about!?!". My tone is intended to convey that security penetration skills have become scarily good and "is it theoretical?" is almost no longer a question worth asking, because the skills, techniques, and tools to take what superficially seems to be a hairline crack into full-blown network ownership are unbelievably well developed.
As I said, I am on the defense side myself, and I will freely admit I can get a bit tetchy when doubt of the viability of a vulnerability occurs; I frequently find myself in the position of being a "team captain" trying to explain that, no, seriously y'all, the other team is coming to play, they've been working out, they take illegal steroids, they practice six days a week, they don't play by the rules and they're coming for our scalps and you're on your third beer telling each other how easy this is going to be... it's not exactly game-winning prep you're doing here....
This article is a perfect example of this: http://googleprojectzero.blogspot.com/2014/08/the-poisoned-n...
The author was able to take an off-by-one error which allowed writing a single null byte all the way to full code execution. These guys are unbelievably good at what they do, and as you state, you can pretty much assume that any vulnerability is exploitable with sufficient effort and skill.
Usually you set these up by aggressively spraying the heap: allocating objects so that copies of the code you're trying to execute end up in thousands of places, aligned to page boundaries. Heap allocation isn't all that random (depending on how new the browser is, obviously), so the intention is to get your freed object replaced with an evil one that has a malicious virtual method table and will jump somewhere into that sprayed heap (to a specific address that will work if you sprayed the heap correctly, and sometimes to one that is calculated to account for ASLR).
If you can get the victim to an attacker-controlled website, it shouldn't be that hard to pull off most of the time, though it's definitely not deterministic.
(Man, remind me to check that this isn't all horribly wrong after defcon...)
There is an entire industry of exploit frameworks waiting for this sort of thing to be slotted in and then deployed on compromised ad network servers to download CryptoLocker or similar for-profit malware. Things can go bad very quickly.
So, does this affect Windows 10 and the new Edge web browser?
Since IE11 comes with Windows 10 - yes.
I thought Edge was a rewrite that threw out most of the code in old IE. Edge is not IE11.
They can't just get rid of IE11 as some businesses still use it.
RCE stands for Remote Code Execution.
WTF? Microsoft must have known what would happen. This isn't 1999 anymore. Did they just call HP's bluff? I was under the impression that MS was generally doing a fairly good job as far as taking these reports seriously.
They didn't drop anything. You cannot reproduce the vulnerabilities from the details they published.
How embarrassing. I think it's hubris at this point that keeps Internet Explorer alive. I think it's been obvious for years that Microsoft just doesn't have the engineering talent to make a decent browser. It's time they bow out of that particular arena and focus on areas where they are strong.
I find it hard to believe this is a lack of engineering talent. MS has some extremely talented people and has pushed out some very neat security stuff well before, say, Apple did. I cannot believe MS has people sitting around saying "well darn, we just don't know how to fix this bug for 6 months now". There's gotta be more to the story... I hope.
What's the alternative explanation?
IE has been a complete debacle since its inception.
Bad management? Thinking HP wouldn't release and putting people on other tasks in prep for Win 10? Not believing they were critical? Some messed up test or compatibility interaction that ended up slipping the release? Anything else interesting? How long do you think these will go unpatched? If they patch them in a week, will you change your opinion to "wow MS has talent but made a mistake"?
Coming to the conclusion that the world's largest software vendor, which ships a rather security-enhanced OS (how long did it take OS X to add ASLR?) and plenty of memory analysis/protection features in its compiler, etc., simply lacks the talent to fix a few bugs... Someone screwed up, but it's unlikely to be a technical talent issue. You need to update your priors.
PS: IE kicked ass at the beginning. I was rather excited about it around IE3. And let's not forget they invented XHR. It only went totally south once MS's management thought they'd won and disbanded the team. And hey, MS went from the leader in instant messaging (which could easily have been turned into the dominating social network) to buying Skype and dropping the MSN brand. They know how to drop the ball from a management perspective.
MSN Messenger was playing catchup to ICQ, they only became a leader by process of elimination when ICQ turned into adware and MSN was the only non-awful contender.
So it became successful because it was the best option? I don't think that deserves the dismissive "only became successful because..."
The best deep-fried snack is still unhealthy. Being the best in a field of crap isn't much of an accolade.