IPv6 Attack Kills Mac OS X and Makes Windows Server 2012 Restart in Seconds
samsclass.info: "Disclosure to Apple - Apple notified 12-11-12."
I often wonder why disclosure of these types of exploits is now "same day" instead of "let the vendor know you will be reporting this to the public in a week."
I wonder if it is out of concern that they will be pressured to keep quiet?
There is a good practical reason for not giving advance notice of a disclosure planned for a major conference, particularly if you're subject to some kind of NDA: more often than not, the security researcher faces the risk of legal action and being shut down.
That pattern, though - "We are going to announce a security hole in a major vendor's product," followed by "Shut down by legal action" - happens so frequently that I wonder whether it's actually part of some larger pattern of entrepreneurial behavior that's opaque to me. Maybe it enhances your reputation? Gets you in the news?
I'm all for full disclosure, but it might be nice to give the vendor a week to prepare a patch that can roll out at the same time you let the world know what you found.
When I discovered a vulnerability in Mac OS X that would allow an unprivileged user to keylog every user on the system (CVE-2007-0724), I let Apple know, then kept quiet until they fixed the issue. It took them 11 and a half months to fix. They thanked me in the security update note, and I now have a CVE on my resume. Was silence the most morally correct action? To this day, I am still unsure.
I've never thought to put CVE-IDs I'm credited for reporting on my resume. Is that...a thing? Do tech employers (outside of security consultancies) even know what a CVE-ID is?
What else should a person put on their resume (beyond job experience) when applying for security roles? Patents? Education? Open source projects? I would think that CVE-IDs would certainly lend color, and probably credibility, to the resume of someone applying for a security position, particularly if the CVE-ID (which has some amount of peer review) was associated with something interesting or relevant to the position being applied for.
It helped land me a firmware development position at what was then a Fortune 500 company.
I'm at Basho (we make the Riak database) in a non-security role, and I pay attention to CVE-IDs. They stick with me and make me remember a resume better, improve somebody's chances of advancing to an interview, and give me something to ask about during it.
If your resume has them, the others don't, and the boss is knowledgeable or curious ... then it could help you stand out. I would think it would look very good for a developer position, especially at a place that makes high reliability and/or network facing products.
Unless there were an easy workaround which you could only disclose by also disclosing the rest of the problem - yes, it was.
This assumes something that I don't believe is defensible: that bad people wanting to install keyloggers on these systems did not already have knowledge of this vulnerability (or, even simpler, that one would seriously believe they would be unable to find this vulnerability without splicer having told them about it, as if he somehow had unique knowledge of the system). Just because I don't have a way to protect myself from harm does not imply that I am somehow better off not knowing that people can harm me.
It does not matter - 99.99% of users have no means or expertise to detect or disable a keylogger if somebody installed it. Unless there is a tool that would allow them to do it, disclosing the vulnerability is useless for them. On the other hand, if such a tool exists, it can probably be published without full disclosure. You are not better off not knowing; you are the same (unless you are in the highly qualified 0.01%). However, various criminals - who do not have the time or expertise to find a vulnerability themselves, but can exploit known ones - immediately gain an edge over you once it is published. For example, I myself, without knowing of existing vulnerabilities, probably could not find one in a reasonable time, but given a good disclosure of one, I probably could, using existing tools and with some luck, produce a working exploit for many of the existing types of holes.
So the problem is that irresponsible disclosure does not help victims at all, and does help criminals. The only positive thing about irresponsible disclosure is that if the vendor is unreasonably slow with issuing patches, and exploits are already known to be in the wild, then the harm is minimal and disclosure can raise the priority of the fix. But absent that knowledge, responsible disclosure is almost always better for the users.
There is an active debate on whether immediate full disclosure is the right or the wrong response. In general, until there is public disclosure, vendors do not feel motivated to fix problems. Unless you release details, people cannot verify that they are vulnerable. And if an exploit is already circulating among "the bad guys", then you're not doing that much damage by disclosing.
In this case it looks like someone is publicly disclosing a vulnerability that is already in circulation, and presumably in use somewhere. It's a vulnerability that might have the potential for remote code execution against multiple operating systems, and there is no guarantee that someone hasn't already figured that out and is using it right now. For someone squarely on the full disclosure side of the debate, this would be about the best case for fully disclosing everything, immediately.
That depends on the vendor. Some vendors are slow, some vendors are fast. It is wrong to say that no vendor ever fixes bugs unless they are publicly disclosed; that is not what responsible disclosure means.
When I say "in general" that means not so for every vendor.
That said, Apple's track record on this topic is not exactly stellar.
Apple rolls out security updates infrequently, but it seems that every time they do, I see fixes for issues I'd never heard of before. Now, I don't exactly seek out vulnerability reports, but they certainly seem to be fixing things that didn't get high-profile articles on social news sites.
This is true.
But people who submit security issues to them say that their turnaround time tends to be very long. Which is bad for their customers if the submitted security issue is being exploited in the wild.
I think vendors should have a policy for dealing with security vulnerabilities. The policy should say how much time they will take to fix an issue and how they will give credit to those who found it.
If a vendor does not have such a policy or is found to have violated it, I would go for immediate full disclosure.
http://blogs.computerworld.com/mac_os_x_java_fiasco_apple_st...
http://www.the4cast.com/apple/apples-flashback-fiasco-what-r...
2 years apart, same outcome. Apple has a terrible track record of fixing bugs. So far it seems giving them a minute, a week, or a month makes no difference. In the past, other vendors have had issues with fixing their bugs in a timely manner or have tried to squash disclosers; that's why lists like full-disclosure exist to this day.
We should find a better example - both of yours were Java vulnerabilities. I totally agree that Apple needs to fix these if they are going to ship Java with their product, but their ability to quickly roll out a Java patch is somewhat less than their ability to quickly tweak the ICMPv6 handling stack of their OS.
I take it 12/11/12 is a December date, not the November one that it is by convention here... I missed that and assumed a month had been given, as screen grabs show November dates.
Yeah - I deal with the UK often enough that I had to double-check the timeline to be sure:
o First posted: 7:30 am 11-20-12 by Sam Bowne
o Page reorganized with Contents section 11-30-12 10:36 am
o New videos 4 and 5 added 12-5-12
o BayThreat Videos added 12-8-12 11:19 pm
o Attacks on the Mac OS X with simulated routers added 7:45 pm, 12-10-12
o Apple notified 12-11-12

I dream (awake!) of The World seeing that date, thinking something along the lines of "but hey, that's annoying, I'm not sure which date that refers to!" and then just adopting ISO 8601 immediately.
Sometimes I'm really baffled by the dominance of American date formats instead of clear, natural and sortable ISO 8601.
YYYY-MM-DD HH:MM:SS.ffffff+ZZZZ
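In Python that's basically one call; here's a minimal sketch (the exact fractional-seconds and UTC-offset rendering shown is just one of several valid ISO 8601 styles):

    from datetime import datetime, timezone

    # Current time with an explicit UTC offset, rendered in sortable ISO 8601 style.
    now = datetime.now(timezone.utc)
    print(now.strftime("%Y-%m-%d %H:%M:%S.%f%z"))  # e.g. 2012-12-11 18:04:05.123456+0000
    print(now.isoformat())                          # e.g. 2012-12-11T18:04:05.123456+00:00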
I used to think the same way until I moved to the UK and people expressed the opinion that YYYY-MM-DD is still an American format.
I guess because for them the day always comes before the month.
Nevertheless, ISO 8601 is clearly the ideal format for its sortability and consistency, cultural imperialism be damned!
For anything handwritten (cheques, dates on signatures etc.), I use YYYY-Mmm-DD (e.g. 2012-Jan-10), as I can't imagine anyone confusing the meaning of it. I avoid DD-MM-YYYY and MM-DD-YYYY, as I usually end up having to check what I meant. No one has commented on it yet.
Typed, I use YYYY-MM-DD (e.g. 2012-01-10), mainly for its sortability.
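If you ever need those exact forms programmatically, a quick Python sketch (standard strftime codes; %b gives the abbreviated month name in an English locale):

    from datetime import date

    d = date(2012, 1, 10)
    print(d.strftime("%Y-%b-%d"))  # 2012-Jan-10, the unambiguous "handwritten" form
    print(d.strftime("%Y-%m-%d"))  # 2012-01-10, the sortable form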
Well, in Poland we do use D.[M]M.[YY]YY too, which is unfortunately quite popular as a short version of the format with a spelled-out month, "D MMMM YYYY r." (r. stands for "rok[u]", i.e. year). The long version is predominantly used in lots of official forms and letters here.
Heh. At a previous job I advocated that format so much that people thought it was a Canadian format.
They were quite surprised when I told them the actual format that was used in Canada.
I'm not sure why the page states Apple was notified 12/11/12, because this attack has been known (by me and other sec folks) for a little while now, and I found out about it from an Apple engineer who works in this area. So they've been aware of it for a while. It also affects iOS.
Maybe Sam Bowne (the author) didn't formally notify Apple until he had more solid details, but he certainly made the issue publicly known before this. It wasn't a surprise attack on anyone.
I'm not defending the disclosure procedures, but I think the author is under the impression that Apple is not going to care or respond, and that it's therefore not worth waiting X days before announcing publicly:
"The new version of the attack is powerful enough that I decided to formally notify Apple. I don't expect them to care much--Microsoft certainly didn't think this was important to them, and Windows is much more vulnerable."
It's also worth noting that while this vuln has a high availability impact, it also requires very specific network access, i.e. you can't run this from your cable modem and kill a random box on the internet.
Sending router advertisement packets to a remote network (obviously spoofing the return address) shouldn't be very hard, but I don't know whether firewalls or routers in between will refuse to forward the packets.
Anyone want to perform a test with me?
Since this is a neighbour discovery mechanism, RAs/RSs are mandated to be link-local (either link-local multicast when unsolicited, or link-local unicast in reply to a solicitation), therefore the scope will kill routing. A router passing along such a crafted RA/RS, or a node not dropping one, would be non-compliant and would break the neighbour discovery mechanism even without any form of attack, as it does not make any sense.
Therefore the attack is bounded to the local link and its neighbouring router(s).
From the RFC [0]: "Source Address MUST be the link-local address assigned to the interface from which this message is sent."

[0]: http://tools.ietf.org/html/rfc4861#page-19
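To make the scope restriction concrete, here's a sketch of a well-formed Router Advertisement using scapy (assuming its ICMPv6 layers; addresses are placeholders). The link-local source and the hop limit of 255 are exactly what keep it on-link: any router would decrement the hop limit, and RFC 4861 tells hosts to discard RAs that don't arrive with 255.

    from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptPrefixInfo

    # A compliant RA: link-local source, all-nodes multicast destination,
    # hop limit 255. Routers never forward it, and hosts drop any RA whose
    # hop limit is not 255, so the attack cannot cross a router.
    ra = (IPv6(src="fe80::1", dst="ff02::1", hlim=255) /
          ICMPv6ND_RA() /
          ICMPv6NDOptPrefixInfo(prefix="2001:db8:1::", prefixlen=64))
    ra.show()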
If there's no reason to believe that the exploit is already being used in the wild, then I completely agree.
I also think that a good compromise would be to pass the exploit information to some 3rd party tasked with releasing full details on a certain date (or, more simply, with handling the responsible release of the exploit). The focus of pressure would then shift from the researcher to this 3rd party, which would presumably have the means to resist it.
In case someone is curious about the code, visit http://opensource.apple.com/source/xnu/xnu-2050.18.24/bsd/ne... and look for nd6_ra_input()
Reading that code I finally realise why Mac OS X doesn't correctly handle option 24 (alternate routes).
Reminds me of the '90s, when WinNuke and Smurf attacks ran wild. I remember one attack that caused our Linux boxes to panic, but I can't remember what it was called. It's not surprising that we're seeing stuff like this in v6. IPv4 has had the bugs hammered out by years of attacks; v6, not so much.
Land? A single spoofed TCP SYN packet with identical src/dst addresses was enough to crash or at least impact many OSs.
http://www.physnet.uni-hamburg.de/physnet/security/vulnerabi...
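For the curious, the whole Land packet really is that small; a rough scapy sketch (the target address is a placeholder, and modern stacks simply drop this):

    from scapy.all import IP, TCP

    target = "192.0.2.10"  # placeholder lab address
    # The classic "Land" packet: a TCP SYN whose source and destination address
    # (and port) are both the victim, which confused late-90s TCP stacks.
    land = IP(src=target, dst=target) / TCP(sport=139, dport=139, flags="S")
    land.show()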
Except that for this attack you have to be link-local (fe80 is link-local scope, and so are router advertisements). Realistically, for most server installs you're okay. For coffee shops with an open, insecure broadcast domain, not so much.
Ping of Death: http://insecure.org/sploits/ping-o-death.html
> Remember one attack that caused our Linux boxes to panic, but I can't remember what it was called.
teardrop?
Does anyone know how long it took for operating systems to implement IPv4 in a secure and stable manner?
Since this attack is based on Router Advertisements, you need to be on the same LAN to exploit it. It also does not apply if the LAN implements RA Guard (RFC 6105).
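If your switch doesn't do RA Guard, even just watching for unexpected RAs helps; here's a rough scapy sketch of a rogue-RA monitor (the interface name is a placeholder, and this only observes, it doesn't block anything):

    from scapy.all import sniff, IPv6, ICMPv6ND_RA

    def log_ra(pkt):
        # Any RA from an address that isn't your real router is suspect,
        # and a flood of them is exactly what this attack looks like.
        print("RA from %s (hop limit %d)" % (pkt[IPv6].src, pkt[IPv6].hlim))

    sniff(iface="eth0", lfilter=lambda p: ICMPv6ND_RA in p, prn=log_ra)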
RA Guard can be evaded, though: http://tools.ietf.org/id/draft-gont-v6ops-ra-guard-evasion-0... But maybe you're right about the LAN part.
http://www.youtube.com/watch?v=8Q8EFwKVKdA for some non-trivial but ingenious ways you can get to a LAN from the outside. (Then again if you're as useless as my ISP, leaving the telnet server on the DSL modem with a default password, listening on the WAN, you don't need to do anything fancy to exploit LANs)
In the video, he tested OS X, Windows XP, and Server 2012. OS X beachballed, XP went to 100% CPU, and Server 2012 panicked and rebooted. All three failed; this isn't just an Apple issue. Was Microsoft notified as well?
In my experience (and as this article suggests), Microsoft operating systems have always been really vulnerable to flooding, even over IPv4. Malformed UDP packets to port 53 (DNS) at about 20-30k packets/sec would instantly lock up a Windows box and prevent it from successfully rebooting. This was one of the preferred methods in the wargames that were played for bandwidth over the shared housing network for Microsoft Research interns in China a few years back.
I don't really understand this stuff, but I think this is an already-known vulnerability. It was discovered at least as early as May 2011: http://samsclass.info/ipv6/proj/flood-router6a.htm
The title is incorrect.
"... this one crashes the Mac, and it makes [Windows] Server 2012 restart."
Fixed.
I cannot help but think: ping6 of death!
New technology, same style.
Ha ha! v6 of the teardrop attack (not really).
http://en.wikipedia.org/wiki/Denial-of-service_attack#Teardr...
It would be interesting to know if iOS is affected as well.
According to the page it works on iPads:
Scroll up on the page.