The RCE that AMD won't fix
mrbruh.com
One good thing we can say about Linux bundling all the drivers is that it obviates the need to run almost all of this type of low-quality (if not outright spyware) driver-management software. They are especially problematic because they can't be sandboxed easily like most other proprietary crap.
For whatever reason, distro maintainers working for free seem a lot more competent with security than billion dollar hardware vendors
> For whatever reason, distro maintainers working for free seem a lot more competent with security than billion dollar hardware vendors
I don't believe that these billion dollar hardware vendors are really incompetent with security. It's rather that the distro maintainers care quite a bit about security, while these hardware vendors consider security concerns to be of much smaller importance; for their business it is likely much more important to bring the next hardware generation to market as fast as possible.
In other words: distro maintainers and hardware vendors are simply interested in very different things and thus prioritize things very differently.
Years of working in embedded computing have left me with the impression that most hardware companies are just bad at software. I think part of it is that the long cycle times of making hardware push them towards a culture of waterfall development. But years of working with the microcontroller libraries for Ethernet PHYs, the bash scripts to build the kernels for SoCs, etc., make me perfectly willing to believe they are incompetent with security.
And which are the companies that are good at software? Please, give at least one example.
Apple
Everyone's gonna give you shit for this answer and there's a hundred things I could tell you about their software that pisses me off, but the bar is so low for software these days, their stuff is still in the high end of quality (they need to do a lot to get back to where they were 10 years ago though)
Only other software I regularly use that I think is overall high quality and I enjoy using are the JetBrains IDEs, and the Telegram mobile app (though the Premium upselling has gotten kinda gross the past few years)
These days I use Apple hardware despite Apple’s software, not because of it.
They at least were good at software. The argument that they currently still are good at software is getting weaker and weaker.
Xcode and Tahoe beg to differ.
Surely you jest, good Sir!
used to be. they're becoming microslop 2.0
This comes down to intentions versus results. Viewed through the lens of results the comment you're replying to is still correct: The result is incompetence. I'd argue that's the only lens that matters when you're on the receiving end of such work.
Sure. New sales means new revenue. Maintenance and support is just overhead.
It's shortsighted, but modern capitalism is more shortsighted than Mr. Magoo.
It's a cost-vs-benefit calculation. As long as the cost of such a blatant violation of security principles doesn't outweigh the benefit of focusing on something else, nothing is done.
https://www.legalexaminer.com/lestaffer/legal/gm-recall-defe...
I don't buy it. It makes sense for a small company where the cost of fixing it might be noticed. But AMD generates some ~$30bn in annual revenues. How much of a developer's time does it take to change the code to use HTTPS? $1000? $5000? Let's be extreme and call it $10,000. That's 0.00003% of AMD's annual revenue. It's barely even a rounding error on their accounts.
Because that's not how corporate maths works. The comparison is not "what is the cost of this vs our current revenue?" The calculation is "what could that engineer be doing instead and what is that worth vs fixing this issue?"
Will fixing this issue bring in more revenue than ignoring it and building a new feature? Or fixing a different issue? If the answer is "no" then the answer is that it doesn't get fixed.
> The calculation is "what could that engineer be doing instead and what is that worth vs fixing this issue?"
I don't agree with this, because it pre-supposes that there's a limited number of engineers available. The question isn't "shall I pull engineer X off project Y so that he can fix security bugs?", it's "shall I hire an additional engineer to fix security bugs?". The comment above mine suggests the answer to that question is "no, because it's too expensive to do that compared to just paying to clean up security breaches after they happen", which is what I was questioning in my first comment.
It doesn't matter: the equation is exactly the same. Why would you hire someone to work on a bug fix or security fix when you could hire that same person and have them work on something even more valuable instead?
Now there's a related problem in the premise: it pre-supposes that the company has an unlimited amount of valuable work to be done. If that were the case, all companies would simply expand their workforce as much as possible all the time, only constrained by money running out (which itself would be an exponential increase since "valuable" work presumably leads to more money in future). In reality, companies do not prioritise expansion above all else. In fact any time a company pays a dividend to its shareholders, or otherwise refrains from spending cash reserves on new hires, it's recognising that it cannot invest profits in an effective way into its labour force.
When framed correctly (there's effectively an unlimited labour supply for most companies, and effectively a limited demand for staff) then the question becomes "shall we hire an engineer to fix security bugs when we don't need an engineer for anything else?".
> it pre-supposes that the company has an unlimited amount of valuable work to be done.
In effect, there is, yes. At the very least, there’s more high value work that most companies can do than there are engineers to do said work. There’s a reason literally every leadership course teaches you how to say “no” over and over again.
Ah, corporate math: if the claim is lower than the recall cost, pay the claim.
You can complain all you want about it but, unless you can push up the cost of the claim, that’s what corporations will do.
First they have to hire a developer who knows how to do this right, as they might not even have one. That could easily eat $10k+ of dev time, since hiring good people takes a lot of time.
You could probably take any user at random from this discussion alone and they'd have the knowledge needed to make the switch from http to https. I'm certain that AMD has all the knowledge they need right now, but even more certain that it wouldn't be hard to hire someone new who does as well
Ok, but this ultimately just comes down to a debate over the amount of the cost. The principle is the same. Even if we double or triple the cost, it's a drop in the ocean for a company like AMD.
You don’t believe it? It took until the early 2000s for Microsoft to take security seriously and they were a money printing machine.
That was a brief chapter in Microsoft’s history. Satya Nadella stopped taking security seriously the day he got in.
I didn't say I don't believe it happens. I'm saying I don't believe it's based on a cost-benefit analysis, i.e. that in a multi-billion-dollar company someone consciously ran the numbers and decided "it's cheaper for us to pay to clean up the mess if there's a security breach than it is to hire someone to fix security bugs". The cost of the latter is too low for this kind of logic to make any sense.
I think it's more realistic that in any sufficiently large company the bureaucracy is so unwieldy that sensible decisions become difficult to make and implement.
Ryzen master isn't a driver. Most of its functionality isn't even available in Linux, even with 3rd party tools or drivers.
Is the issue in the OP related to Windows? This wasn't immediately clear.
Yes, because you would only run such updater software on Windows.
Aren’t vendors moving to a browser-based control model, where the hardware runs a local web server that exposes various settings? It sounds terrible for security.
It is, mostly, the organization Linus created (and of course the enormous number of people participating).
An absurd amount of weight is carried by a small number of very influential people that can and want to just do a good job.
And a signal that they're the best is you don't see them in the news.
We need more very influential people who aren't newsworthy.
The most direct comparison would be the package manager, that's why I said distros. These driver management tools do a (poor) job at being a package manager, along with many other commercial software installation tools.
With Linux itself, it helps that they are working in public (whether volunteering or as a job), and you'd be sacked not in a closed-door meeting, but on LKML for everyone to see if you screw up this badly.
Popular Linux distributions also use HTTP CDNs. Even though the content is always signed, it still exposes the HTTP stack, signature verification code and a bunch of the application logic to the attacker.
Apt has had issues where captive portals corrupt things. GPG has had tons of vulnerabilities in signature verification (but to be fair here, Apt is being migrated to Sequoia, which is way better).
But these distros are still exposing a much larger attack surface compared to just a TLS stack.
You don't really need to run these updaters on Windows.
The corporate drones have the management tower above them, the OSS enthusiasts do not. That is it.
This is why I've blocked all HTTP traffic outgoing from my machines.
A lot of people have brought this up over the years:
https://www.reddit.com/r/AMDHelp/comments/ysqvsv/amd_autoupd...
(I'm fairly sure I have even mentioned AMD doing this on HN in the past.)
AMD is also not the only one. Gigabyte, ASUS, many other autoupdaters and installers fail without HTTP access. I couldn't even set up my HomePod without allowing it to fetch HTTP resources.
From my own perspective allowing unencrypted outgoing HTTP is a clear indication of problematic software. Even unencrypted (but maybe signed) CDN connections are at minimum a privacy leak. Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.
> Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.
For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key. For package managers that usually means only trusting GPG, which is at the very least no less trustworthy than the many TLS and HTTP libraries out there.
There's a massive difference. The entire HTTP stack comes into play before whatever blob is processed. GPG is notoriously shitty at verifying signatures correctly. Only with the latest Apt is there some hope that Sequoia isn't as vulnerable.
In comparison, even OpenSSL is a really difficult target; it'd be massive news if you succeeded. Not so much for GPG. There are even verified TLS implementations if you want to go that far. PGP implementations barely compare.
Fundamentally, TLS is also tremendously more trustworthy (formally!) than anything PGP. There is no good reason to keep exposing all of this to potential middlemen rather than just the TLS stack. There have been real bugs where captive portals unintentionally broke Apt. It's such an _unnecessary_ risk.
TLS leaves any MITM very little to play with in comparison.
> For signed payloads there is no difference, you're trusting <client>'s authentication code to read a blob, a signature and validate it according to a public key.
Assuming this all came through unencrypted HTTP:
- you're also trusting that the client's HTTP stack is parsing HTTP content correctly
- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
> you're also trusting that the client's HTTP stack is parsing HTTP content correctly
This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
For technical reasons, unencrypted HTTP in practice also means the simpler (and, for bulk transfers, more performant) HTTP/1.1, since standard HTTP/2 dictates TLS and the special non-TLS variant ("h2c") is not as commonly supported.
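For a sense of that scale, a complete HTTP/1.1 exchange over a raw socket is just a few lines of plain-text framing. A minimal Python sketch (example.com standing in for any plain-HTTP mirror):

    import socket

    # A complete HTTP/1.1 request/response by hand: the framing a client
    # must parse is a status line, headers, a blank line, then the body.
    HOST = "example.com"
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection((HOST, 80), timeout=10) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    headers, _, body = response.partition(b"\r\n\r\n")
    print(headers.decode("latin-1"))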
> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.
> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
You don't. Authentication 101 (which also applies to how TLS works): authenticity is always validated before inspecting or interacting with content. These are the same rules TLS needs to follow when it authenticates its own messages.
Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).
> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
You don't, as the signature must be authentic from a trusted author (the specific maintainer of the specific package for example). The server or attacker is unable to craft valid signatures, so something "tacked-on" just gets rejected as invalid - just like if you mess with a TLS message.
> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
The basis of your trust is invalid and misplaced: Not only is TLS not providing additional security here, TLS is the more complex, fragile and historically vulnerable beast.
The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old version of all your mirrors (which is valid and signed by the maintainers), withholding an update without you knowing. But such a MITM can also just break your connection to a TLS mirror, and then you can't update either. So no: it's just privacy.
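To make the verify-before-parse rule concrete, here is a minimal Python sketch, assuming an Ed25519 detached signature and a public key pinned in the client (real package managers use OpenPGP tooling like GPG or Sequoia, but the order of operations is the point; the key below is the RFC 8032 test vector, purely illustrative):

    from pathlib import Path
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # The key ships with the client; blob and signature arrive over plain
    # HTTP and stay untrusted bytes until the signature checks out.
    PINNED_KEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
    ))

    def load_verified(blob_path: str, sig_path: str) -> bytes:
        blob = Path(blob_path).read_bytes()
        sig = Path(sig_path).read_bytes()
        try:
            PINNED_KEY.verify(sig, blob)   # authenticate first...
        except InvalidSignature:
            raise RuntimeError("tampered or corrupt download, rejecting")
        return blob                        # ...only then let a parser near it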
> HTTP/1.1 alone is a trivial protocol
Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html
> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.
If you don't trust the HTTP client not to do something stupid, this all applies to HTTPS too. Plus, it can also bork the SSL verification phase, or skip it altogether.
TLS stacks are generally significantly harder targets than HTTP ones. It's absolutely possible to use one incorrectly, but then we should also count all the ways you can misuse an HTTP stack; there are a lot more of those.
Doesn't this break CRL fetching and OCSP queries?
Nothing really cares except like Prusa Connect.
AFAIK a lot of Linux package repositories are HTTP-only as well. Convenient for anyone tracking what package versions have been installed on a certain system.
They usually support both, but it's important to note that HTTPS is only used for privacy.
Package managers generally enforce authenticity through signed indexes and (directly or indirectly) signed packages, although be skeptical when dealing with new/minor package managers as they could have gotten this wrong.
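To illustrate the indirect part: a signed index can carry the hashes of everything else, so one signature covers every package. A toy Python sketch (not apt's real Release/Packages format, just the shape of the chain):

    import hashlib

    # A signed index lists the hash of each package, so verifying one
    # signature (on the index) transitively authenticates every package.
    def verify_package(index_text: str, pkg_name: str, pkg_bytes: bytes) -> None:
        # assume the index's own signature was already checked, as in the
        # verify-before-parse sketch upthread
        listed = dict(line.split() for line in index_text.splitlines())
        expected = listed[pkg_name]                    # sha256 hex from index
        actual = hashlib.sha256(pkg_bytes).hexdigest()
        if actual != expected:
            raise RuntimeError(f"{pkg_name}: hash mismatch, rejecting package")

    index = "hello.deb 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\n"
    verify_package(index, "hello.deb", b"hello")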
Reducing the benefit of HTTPS to only privacy is dishonest. The difference in attack surface exposed to a MITM is drastic, TLS leaves so little available for any attacker to play with.
This is super bad right? Like anybody who has this running will be vulnerable to a super basic HTTP redirect -> installer running on their machine attack, right? And on top of that it's for something that is likely installed on _so many_ machines, right?
I don't think I've ever seen something this exploitable that is so prevalent. Like couldn't you just sit in an airport and open up a wifi hotspot and almost immediately own anyone with ATI graphics?
You can get arrested for this in my country, fun fact.
I guess that's how you prevent anything: just make it illegal, and the exploit becomes an unintended illegal feature, like occupying low-frequency radio spectrum.
Not that this isn’t bad, but doesn’t this only apply when an update is available?
So you have to be on a shady hotspot without a VPN, AMD has to have recently published an update, and your update scheduler has to be timed to run.
That would be a little less than “immediately own anyone with ATI”.
You only need a device on the network to spam DHCP messages advertising a malicious DNS server. So you don't need a "shady hotspot", only a compromised device within the network.
Oh yeah fair point, the HTTPS-ness of the first step is a helpful backstop
If somebody is MITMing a target person, they will respond positively to "update available?" calls from that person and then serve the tainted update. The article does not say what the frequency of the auto-update check is. Let's say once per day. If somebody is targeted, they're one day away from RCE.
The update check is HTTPS, only the files themselves are HTTP.
TLS doesn’t mask the IP of the server. The updater probably isn’t using DNS over HTTPS. If I can determine that a user’s updater just hit the update check server, I can start impersonating the update server.
That takes it out of one-day-away territory, but it does allow an attacker to have their malicious HTTP capture up (and detectable) only during the actual attack window.
Then, of course, if you’re also being their DNS server you can send them to the wrong update check server in the first place. I wonder if the updater validates the certificate.
I missed that, thanks!
Who would connect to an unknown person's hotspot?
But it seems pretty trivial for some bad actor at a local ISP.
> Who would connect to an unknown person's hotspot?
SSID "Sydney Airport Wifi" or the like.
You're more [security minded](https://www.schneier.com/blog/archives/2008/03/the_security_...) than me
This is oh sweet summer child stuff.
Have you ever gone to a crowded public place and set up an open hotspot?
Ah, I think I never had to connect to a public open hotspot, because by the time I grew up 4G and then 5G internet were commonplace.
Never travelled to another country and needed internet before you could get a local SIM working?
Airports are prime for this, but in general the average person keeps WiFi on, and the click-through on Android to use open networks is seamless.
esim for the win.
And do you have them in your laptop? Or are you just using your phone and don't own a laptop at all?
I would use my phone's 'hotspot' long before I tried random wifi on my laptop?
Are you being willfully obtuse or do you actually believe that's true of everyone? There are so many reasons why someone might have a laptop in such a situation but not be able to use a hotspot on their phone - it's not even worth listing them.
That isn't what this thread suggested; I was supporting plausibility of the GP's claim never to have used public Wifi because of good 4G; stating they could still have used a laptop in public. My wording was also explicit that this isn't always an option.
> Like couldn't you just sit in an airport and open up a wifi hotspot and almost immediately own anyone with ATI graphics?
Some of us do not enable automatic updates (automatic updates have been the peak of stupidity since the Win98 era). And when you sit in an airport, you don't update all your programs.
Automatic updates are absolutely not peak stupidity. Most users’ devices would have nasty security vulnerabilities wide open for a much longer period of time without automatic updates.
This assumes automatic updates fix security vulnerabilities, which is almost never the case.
Laughably ignorant update policies.
So compromising one DNS lookup is sufficient, ex:
1. Home router compromised, DHCP/DNS settings changed.
2. Report a wrong (malicious) IP for ww2.ati.com.
3. For HTTP traffic, it snoops and looks for opportunities to inject a malicious binary.
4. HTTPS traffic is passed through unchanged.
__________
If anyone still has their home-router using the default admin password, consider this a little wake-up call: Even if your new password is on a sticky-note, that's still a measurable improvement.
The risks continue, though:
* If the victim's router settings are safe, an attacker on the LAN may use DHCP spoofing to trick the target into using a different DNS server.
* The attacker can set up an alternate network they control and trick the user into connecting, e.g. by impersonating a real coffee shop's network or just offering a vague "Free Wifi."
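Step 4 in the chain above is the crux: with certificate verification on (the default in most HTTP libraries), a spoofed DNS answer alone gets the attacker nothing, because they can't present a valid certificate for the hostname. A minimal Python sketch, using the hostname from step 2 and a hypothetical path:

    import requests

    # With verification on (the requests default), a DNS answer pointing
    # ww2.ati.com at an attacker's box fails the TLS handshake instead of
    # silently feeding us an attacker-chosen binary.
    try:
        resp = requests.get("https://ww2.ati.com/updates/catalog.xml", timeout=10)
        resp.raise_for_status()
    except requests.exceptions.SSLError:
        raise SystemExit("TLS verification failed, refusing to download")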
It's usually very simple to get someone to join your malicious WiFi network with SSID spoofing, jamming, etc.
Just spoofing a DNS reply would be enough if it arrives first, wouldn't it?
Can anyone rationalize this decision? Sure, technically this is outside the stated scope, but the severity of this vulnerability is immediately obvious, which should trigger some alarm bells that the scope needs to be reconsidered.
If they lose just one customer over this they're losing more than the minimum $500 bounty. They also signal to the world that they care more about some scope document than actually improving security, discouraging future hackers from engaging with their program.
This would be a high severity vulnerability so even paying out $500 for a low severity would be a bit of a disgrace.
What's the business case for screwing someone out of a bounty on a technicality?
Honestly, even if it were in scope, them getting paid would be a bit odd given how AMD has been made aware of this multiple times over the years.
If this is as described, it's a pretty major failure of security-vulnerability report triage, and rises to the level where security departments at major corporations will be having meetings about whether they want to ban AMD hardware from their organizations entirely, or only ban the AMD update application. If this had gone the "brand name and a scored CVE" route, it would probably have gotten a news cycle. It might still get a news cycle.
The threat model here is that compromised or malicious wifi hotspots (and ISPs) exist that will monitor all unencrypted traffic, look for anything being downloaded that's an executable, and inject malware into it. That would compromise a machine that ran this updater even if the malware wasn't specifically looking for this AMD driver vulnerability, and would have already compromised a lot of laptops in the past.
Anyone can request a CVE, this is sadly the most likely path towards getting it fixed.
They're not considering it not to be a vulnerability. They're simply saying it's outside the scope of their bug bounty program.
Apparently it's also outside the scope of their bug fixing program, despite being trivially remotely exploitable to get privileged code execution.
Man in the middle attacks may be "out of scope" for AMD, but they're still "in scope" for actual attackers.
Ignoring them is indefensibly incompetent. A policy of ignoring them is a policy of being indefensibly incompetent.
The only thing cited here is a response from their bug bounty program. Excluding MITM from a bug bounty is perfectly legitimate. Actually, excluding anything from a bounty program is.
The response in the screenshot appears to be an "out of scope" response, but the blog poster used some editorial leeway and called it "wont fix/out of scope". Going forward, we can keep decompiling and seeing if this vulnerability is still there and whether "wont fix" was a valid editorialization.
Though, by publishing this blog and getting on the HN front page, it really skews this datapoint, so we can never know if it's a valid editorialization.
Edit: Ah, someone else in this thread called out the "wont fix" vs "out of scope" after I clicked on reply: https://news.ycombinator.com/item?id=46910233. Sorry.
Excluding severe vulnerabilities like ones that completely pwn your machine just by connecting it to an untrusted network is not legitimate for any reasonable bug bounty program.
Of course, a company can do it (they just did!), but it shows that they don't care about security at all.
Especially if the answer is "sorry, this is out of scope" rather than "this is out of scope for our bug bounty, so we can't pay you, but it looks serious and we'll make sure to get a patch out ASAP".
Ethical disclosure existed before bug bounties. Someone who wants to ensure the remediation of the bug might recognize that the staff member responding to bug bounty reports is limited in their purview and might be badly trained. Upon learning that it is out of scope for the bug bounty program, did the author try their security@ address or another referenced security contact?
Your characterization of this bug as one "that completely pwn your machine just by connecting it to an untrusted network" is also hyperbolic to the extreme.
Looks like there's a serious security bug in their scope document.
If you read it carefully, you'll notice that the blog post misrepresents the AMD response.
The blog post title is "AMD won't fix", but the actual response that is quoted in the post doesn't actually say that! It doesn't say anything about will or won't fix, it just says "out of scope", and it's pretty reasonable to interpret this as "out of scope for receiving a bug bounty".
It's pretty careless wording on the part of whoever wrote the response and just invites this kind of PR disaster, but on the substance of the vulnerability it doesn't suggest a problem.
How's that? What do you think the purpose of a bug bounty is? If you think it's "to eradicate all bugs", no, very no.
I don't expect an unbounded scope but I do expect it to cover the big scary headline items like RCE. Additionally, this can be exploited without MitM if you combine with e.g. a DNS cache poisoning attack. And they can still fix it even if they're not willing to pay a bounty.
DNS poisoning is a MITM vector; in fact, it's the most popular MITM vector.
Really? I thought MitM was always intercepting/manipulating traffic from or to the victim.
What you wrote is the definition of MITM.
Op and others are saying DNS poisoning is a popular way of achieving that goal.
Oh you mean that it's a popular way of initiating the interception part of MitM, got it.
This is the place they direct researchers to report bugs. If they don’t want to pay out for MITM, that’s fine, but they should still be taking out-of-scope reports seriously
+1. Bounty aside, this deserves attention. I wouldn't want to award bounties for MitM either if I made it this easy. They closed the issue as 'out of scope'... with no mention of follow-up (or even of the bounty, which we don't care about).
I'm skeptical, to say the least. Industry standard has been to exclude MitM attacks that certificates/signatures already defend against, not everything.
A bug bounty should motivate exploitable bugs to be reported so that they can be fixed. IMO, if it refuses to accept certain kinds of bugs that can still be exploited, it's not working properly.
A bug bounty directs internal engineering efforts. It can't eradicate bugs; that's not how bugs work.
I wasn't agreeing with your example.
Wow, this is an extremely serious vulnerability. People are writing it off because it requires MitM, but there's always a MitM; the internet is basically a MitM.
MitM isn't even necessary, a rogue DHCP server configuring a malicious DNS could attack this.
That's still a MITM, albeit a LAN-local one. The WAN beyond your LAN isn't the only place a MITM can sit.
If my computer asks your computer what dns server to use, and you respond with the address of a nefarious one, it's not necessarily a mitm.
That is a form of MITM. It’s just changing DNS-to-IP bindings rather than IP-to-MAC or prefix-to-ISP ones.
FTA: “Therefore I will have to close the report as out of scope”
and “05/02/2026 - Report Closed as wont fix/out of scope”
I think it’s a bit early to say “won’t fix”. AMD only said that it was out of scope for the channel used to report it (I don’t know what that was, but it likely is a bug bounty program) and it’s one day after the issue was reported to them.
AMD AutoUpdate terminal always pops up at midnight for me and then requires me to dismiss it. I've been meaning to uninstall this but always forget about it the next morning.
Now I have good reason to block it entirely and go back to manual updates
Omg that's what it is. I've been looking for DAYS.
When I still used Windows that console window would show up and hang for hours. I'd finally close it when shutting down my PC, and it was guaranteed that the driver would be gone on the next boot and I'd need to install it again.
It's the shittiest autoupdater I've ever had to deal with. It never actually managed to install an update.
From the title, I thought this was going to be another one of those speculative execution information leakage bugs that are basically impossible to fix, but something this simple and easily fixable -- it's discouraging. Hopefully this decision is reversed. Also "Thank you for hacking our product" seems a bit unprofessional for someone engaging in responsible disclosure for a major security issue with your product.
"Thank you for hacking our product" sounds perfectly appropriate to me; it clearly uses "hacking" in the positive sense.
It actually says "hacking on one of our programs", which makes it even more obvious that it's using the word closer to the positive traditional hacker culture sense.
I'm sure that still looks unprofessional to some people, just like any jargon that isn't corporatese does.
While I don't like that the executable's update URL is using just plain HTTP, AMD does explicitly state in their program that attacks requiring man-in-the-middle or physical access are out of scope.
Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
What I'm more curious about is the presence of both a Development and a Production URL for their XML files, and their use of a Development URL in production. While, like the author said, that URL is using TLS/SSL so it's "safe", I would be curious to know whether the executable URLs are the same in both XML files; if not, I would perform binary diffing between those two executables.
I imagine there might be some interesting differential there that might lead to a bug bounty. For example, maybe some developer debug tooling that is present only in the development version but is not safe to use in production and could lead to exploitation, especially since they seem to use the Development URL in production for some reason...
For paying out, maybe, but this is 100% a high priority security issue regardless of AMD's definition of in scope, and yet because they won't pay out for it they also seem to have decided not to fix it.
> is a separate issue.
No, just no. This is not a separate issue. It is 100% the issue.
Let's say I'm a nation-state attacker with resources. I write up my exploit and then do a BGP hijack of whatever IPs the driver host resolves to.
There you go: I've compromised possibly millions of hosts all at once. You think anyone cares that this wasn't AMD's issue at this point?
You misunderstand.
I already said I do not like that it is just using HTTP, and yes, it is problematic.
What I am saying is that the issue the author reported and the issue that AMD considers man-in-the-middle attacks as out-of-scope, are two separate issues.
If someone reports that a homeowner has their keys visibly on top of the mat in front of their front door, and the homeowner replies that they do not consider intruders entering their home to be a problem, these are two separate issues, with the latter having wider ramifications (since it would determine whether other methods and vectors of MITM attack, besides the one the author of the post reported, are declared out of scope as well). But that doesn't mean the former issue is unimportant; it just means that it was already acknowledged, and the latter issue is what should be focused on (at least on AMD's side; it still presents a problem for users who disagree with AMD about it being out of scope).
The phrasing of your first two sentences in your first post makes it sound like you're dismissing the security issue. For saying that it's a real security issue and then another issue on top you should word it very differently.
> The phrasing of your first two sentences in your first post makes it sound like you're dismissing the security issue.
Genuine question: how does it sound like I'm dismissing it? My first sentence begins with the phrase
> I don't like that the executable's update URL is using just plain HTTP
And my second sentence
> Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
which, with context that AMD reported MITM as out-of-scope, clearly indicates that I think of it as an issue, albeit, a separate one from the one the author already reported.
Meta: somewhat surprised that they're still using ati.com. ATI was acquired back in 2006:
Why even bother with WONTFIX? Putting nginx with Let's Encrypt in front of it would have taken just as long.
How the hell is it possible that they're still using the ATI domain and HTTP in 2026? They acquired ATI 20 fucking years ago.
It really makes you wonder what level of dysfunction is actually possible inside a company. 30k employees and they can't get one of them to hook up certbot, and add an 's' to the software.
Marking this as a WONTFIX should have gotten somebody fired at AMD. I find it hard to believe that at least one of their VPs doesn't frequent this site.
I don't normally call for people to get fired from their jobs, but this is so disgusting to anyone who takes even a modicum of pride in their contribution to society.
Surely, someone gets fired for dismissing a legitimate, easily exploited RCE using a simple plaintext HTTP MITM attack as a WONTFIX... Right???
What is the root cause of this? It is said that AMD is a hardware company that neglects software, but recently they have issued lots of declarations about becoming software-first.
Bugdoors, bugdoors everywhere..
The fact that they refuse to fix it is the sketchiest part, and they should also be held accountable for things like this, IMO.
Many people don't worry about connecting to random WiFi anymore, but AMD users still have to.
I do usually worry, because DNS spoofing is still possible and we are one step (e.g. a compromised certificate) away from being pwned. But yeah, one shouldn't have to worry.
Thanks for this, author.
No https://, and no cryptographic signature or checksum that I can see. This makes it almost trivial for any nation-state to inject malware into targeted machines.
I removed AMD auto-update functionality from Windows boxen. (And I won't install anything similar on Linux.) And, besides, the Windows auto-update or check process hangs with a blank console window regularly.
Such trashy software ruins the OOBE of everything else. Attention to small details, zen philosophy, and all that.
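Even the cheapest mitigation, a pinned checksum, is only a few lines. A minimal Python sketch (the URL and digest are hypothetical placeholders, and the expected digest would itself have to arrive over an authenticated channel to mean anything):

    import hashlib
    import urllib.request

    URL = "http://ww2.ati.com/updates/setup.exe"  # hypothetical placeholder
    EXPECTED_SHA256 = "0" * 64                     # hypothetical placeholder

    with urllib.request.urlopen(URL, timeout=30) as resp:
        payload = resp.read()

    # Refuse to touch the download unless it matches the pinned digest.
    if hashlib.sha256(payload).hexdigest() != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch, refusing to execute download")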
If this is true, it seems like a much more serious vulnerability than I was expecting when I clicked the link.
And it's obviously an oversight; there is no reason to intentionally opt for http over https in this situation.
It's not directly an RCE unto itself; it requires something else, e.g. a compromised DNS server on the network. So it's no surprise they ignored it.
Also, if AMD is getting overwhelmed with security reports (a la curl), it's also not surprising. Particularly if people are using AI to turn bug bounties into income.
Lastly, if it requires a compromised DNS server, someone would probably point out a much easier way to compromise the network rather than relying on the AMD driver installer.
As someone who works in security: the whole "a compromised DNS on the network" angle would be a total excuse not to pay.
The fact is allowing any type of unsigned update on HTTP is a security flaw in itself.
> someone would probably point out a much easier way to compromise the network
No, not really. That's why every other application on the planet that does security of any kind uses either signed binaries or HTTPS only. Simply put, allowing HTTP updates is insecure. The network should never be trusted by default.
What's even fucking dumber on AMD's part is that this is just one BGP hijack away from a worldwide security incident.
> The fact is allowing any type of unsigned update on HTTP is a security flaw in itself.
Reminds me of about ten years ago, when I was installing Debian or something and noticed the URLs for the apt mirrors were http and not https. People helpfully pointed out this is a non-issue because the updates are signed.
Ok I guess but then why did Debian switch to https?
> Ok I guess but then why did Debian switch to https?
Because security people kept bullying them?
You're completely misunderstanding the impact. If you run AMD's software you're effectively giving root access to your computer to any wifi network you connect to and any person who happens to be on that network.
It really just requires a network that doesn't use some kind of NAC since you can trivially do ARP poisoning of your target.
Even though the software might have been written by AMD, I think there is at least one more party to blame.
Who has put that software on the PC in the first place?
Was it the manufacturer?
Or was it Microsoft via Windows?
This is unfortunate news but I'm not even surprised that they don't seem to care. Nice writeup.
Based on the policy (and my hat) I have to assume some business partner failed to maintain the 'ca-certificates' equivalent for Windows (or NTP) and was rewarded in their insane demand for plaintext.
So easy to fix, just... why? My kingdom for an 's'. One of these policies is not like the others. Consider certificates and signatures before categorically turning a blind eye to MitM, please: you "let them in", AMD. Wow.
> This means that a malicious attacker on your network, or a nation state that has access to your ISP can easily perform a MITM attack and replace the network response with any malicious executable of their choosing.
http://www2.ati.com/...
I've been blocking port 80 since forever, so there's that. But now ati.com is going straight into my unbound DNS server's blocklist.
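For anyone wanting to do the same, a minimal unbound.conf sketch (assuming you want NXDOMAIN for the zone and everything under it):

    # unbound.conf: answer NXDOMAIN for ati.com and everything below it
    server:
        local-zone: "ati.com." always_nxdomain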
I was in the shop for a new PC today and had decided on a 9950X3D, but somehow I opened HN just before checkout, and now I am a happy owner of an Intel 14900!
>Attacks requiring physical access to a victim's computer/device, man in the middle or compromised user accounts
I love how they grouped man in the middle there
Spooky. Is this not an exposure if using Linux?
Of course not, the vulnerability is in "AMD’s AutoUpdate software" (i.e., vendor trash).
If you’re using Linux you’re almost certainly using your package manager to get any relevant updates.
> This means that a malicious attacker on your network, or a nation state that has access to your ISP can easily perform a MITM attack and replace the network response with any malicious executable of their choosing.
I am pretty sure a nation state wanting to hack an individual's system has way more effective tools at its disposal.
Presumably, all Windows installations running on AMD are auto-executing this auto-update program.
I am pretty sure nation states hire people smart enough to use whatever works.
What the hell is more effective than getting root with a trivial MITM?
Not only is it effective, it's stealthy, in that it doesn't out you. It's obviously possible to both find and exploit it without a huge investment, which means nobody knows you're a nation state when you use it. You don't have to risk burning any really arcane zero-days or any hard to replace back doors.
Nation states are absolutely going to use things like that. And so is everybody else.
I guess one should keep their eyes out on the next big BGP hijack.
...such as talking directly to AMD, or even Microsoft, which is scarier, as Windows updates are signed; as long as they can be convinced to sign the right thing, it'll look even more legit.
Auto-update is an RCE every time. When the software checks a signature, you just need the key. And the delivering enterprise has the key. Every time.
I don't understand why most people think auto-updating software would in any way create more security. It just creates more attack vectors for every piece of software that has an auto-updater.
Remote Code Execution (RCE) is a type of vulnerability. Intentionally running code from a developer you trust is not a vulnerability.
An auto-update mechanism only becomes an RCE if it allows unauthorized third parties to execute code on your machine by failing to verify that the code comes from a legitimate source.
> you just need the key
Secrecy of cryptographic keys is the basis of all cryptography we use. There's no "just": you need the key, and you don't have it.