Nvidia Hit by Major Cyberattack (wccftech.com)
The article lacks a lot of information, unfortunately, but it makes it sound like the website (the distribution channel) was the only part they are concerned about, which wouldn't be classed as major.
What I'd class as major would be some third party gaining access to NVIDIA's RTL designs and source code for their drivers for current and unreleased GPUs, but this hack doesn't sound remotely close to that. Luckily.
> the website (distribution channel) was the only part they are concerned about, which wouldn't be classed as major
By whom? I'd certainly class it as major if their website could distribute malware instead of the real drivers, as that impacts everyone. Stealing nvidia's proprietary designs impacts only them.
I visited that page a few days ago to set up a new system which is, at the same time, supposed to be very secure (the proprietary drivers are indeed one of the weak points, but I can't quite get around that if the GPU is to be fully functional). If this was compromised then I can start over and have a bunch of passwords and private keys to rotate.
Maybe I should consider doing the rotation already... Better safe than sorry in such cases.
> What I'd class as major would be some third party gaining access to NVIDIA's RTL designs and source code for their drivers
Ransomware operators are not that clever; they go for low-hanging fruit. I mean, yeah, by all means, do recon on a system you just pwned and try a supply chain attack, but that's outside the range of these operators. They only have a hammer, and everything just looks like a nail.
Even if they get the RTL, I'm not sure how useful it would be. While Russia does have semiconductor fabs, apparently their smallest node is around 65nm, which is completely useless for the large designs current NVIDIA GPUs use. At best they could have them made at a fab in mainland China, but even there the smallest node is only 14nm.
A thief wouldn't be using the RTL to make a clone of an NVIDIA graphics card; they'd be using the IP cores as modules in their own designs. With some minor adjustment it shouldn't be too difficult to get at least most of the RTL working on a different node (maybe at a lower clock speed).
That's not how VLSI chip design works. You can't just take RTL designed for 5-8 nm, zoom it up to 65nm, and expect it to still work.
When you design a CPU or GPU, the RTL blocks, such as the core pipelines, schedulers, and various buses, are designed from the start for a particular manufacturing process, on which they're expected to close timing at specific frequencies: fast enough to feed the pipelines and hit the expected top performance. Missing those process expectations means the design performs much worse than expected in practice.
That's why many of Intel's past designs sucked so badly on performance and efficiency: their 10nm manufacturing process fell behind, so they had to scale their newer designs back onto the aging 14nm+++++ process, which caused those CPUs to flop big time.
>maybe lower clock speed
That is an understatement. 65nm is roughly ten times larger than what NVIDIA is currently using. That means the area would be about 100 times larger and any signal distance 10 times longer. And keep in mind that NVIDIA GPU designs already take up quite a bit of area on modern nodes.
So you'd likely have to cut the design down to a hundredth of its modules, and those would run at a tenth of the speed.
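To put rough numbers on that, here is a back-of-envelope sketch. Everything in it is an assumption for illustration: the node figures (node names are marketing labels, not literal feature sizes), the die size (roughly a big modern NVIDIA die), and the idea that a layout could be photo-enlarged at all.

```python
# Back-of-envelope scaling estimate; all figures are approximate assumptions.
modern_node_nm = 7        # assumed "5-8 nm class" process
old_node_nm = 65          # smallest node assumed available domestically
die_area_mm2 = 628        # roughly a large modern NVIDIA die, for illustration
reticle_limit_mm2 = 858   # approximate single-exposure reticle limit

linear_scale = old_node_nm / modern_node_nm    # ~9x longer wires
area_scale = linear_scale ** 2                 # ~86x more area
scaled_area_mm2 = die_area_mm2 * area_scale    # ~54,000 mm^2

print(f"linear scale: {linear_scale:.1f}x, area scale: {area_scale:.0f}x")
print(f"naively rescaled die: {scaled_area_mm2:,.0f} mm^2 "
      f"(reticle limit is about {reticle_limit_mm2} mm^2)")
print(f"fraction that fits on one maximum-size die: "
      f"{reticle_limit_mm2 / scaled_area_mm2:.1%}")
```

Even granting the simplification, the rescaled design ends up dozens of times over the reticle limit, which is the point about having to cut it down to a small fraction of its modules.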
Is signal propagation actually close to being a limiting factor in clock speeds for most designs? I thought it was pretty much always thermals.
That is not the point; the signal propagation times in the VLSI blocks are engineered to work properly at the specific physical size. If the structures are scaled to a larger node size, the timing variances increase. If you want to do this, you can either 1) reengineer all the VLSI blocks to meet timing requirements at the larger node size (maybe impossible) or 2) slow the clock speed to loosen the requirements.
Isn't it exactly the point? If, provided sufficient cooling, you could double the clock speed without running into clock skew or other timing issues, then timing shouldn't be a problem if you scale things up physically by 50% without touching the clocks. I don't think you'd have to lower clocks by 90% to increase the size of most designs tenfold. Or rather, even if you did, the reason wouldn't be signal propagation time.
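For what it's worth, the disagreement here is about the per-path timing budget rather than total heat. A minimal sketch of that budget, with every delay value invented purely for illustration:

```python
# Register-to-register timing: each pipeline stage's clock period has to cover
# clock-to-Q, gate (logic) delay, wire/propagation delay, and setup time.
# Every number below is invented purely for illustration.

def max_freq_ghz(t_clk_to_q_ns, t_logic_ns, t_wire_ns, t_setup_ns):
    period_ns = t_clk_to_q_ns + t_logic_ns + t_wire_ns + t_setup_ns
    return 1.0 / period_ns

# Hypothetical critical path on the original, small node:
print(f"small node: {max_freq_ghz(0.05, 0.40, 0.10, 0.05):.2f} GHz")

# The same path naively re-targeted to a much older node: wires are ~10x longer
# (so wire delay grows at least ~10x, worse once RC dominates), and the
# transistors themselves are slower, so gate delay grows too.
print(f"older node: {max_freq_ghz(0.15, 1.20, 1.00, 0.15):.2f} GHz")
```

As I understand it, cooling never appears in that sum; it limits how much switching the whole die can do, while the per-path delays above are what timing closure, and therefore the achievable clock, is about.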
There are EUV machines in China. Not sure why people keep perpetuating the myth that there aren't.
My guess is it's a Linux user group trying to finally liberate the source code for their graphics card drivers.
/humor
Ah, the notorious Penguin Collective.
Don't they publish books, too?
Literally my first thought. Finally, no more graphics driver problems.
"We'll send them AMD"
"Ammunition of Mass Destruction?"
"No..."
Thanks for the downvotes, appreciate it. I thought it was funny but apparently people have a stick up their ass
https://twitter.com/vxunderground/status/1497484483494354946
LAPSU$, an extortion group operating out of South America, claims to have breached NVIDIA and exfiltrated over 1TB of proprietary data.
LAPSU$ claims NVIDIA performed a hack back and states NVIDIA has successfully* ransomed their machines.
Putting on my paranoia hat: what if some aggressor was indeed able to introduce code into the Nvidia drivers which, if put on enough systems, would cripple the ability to (re-)train AI systems which might be used in military defense systems? What if, even worse, people decided to use Nvidia hardware in the inference systems as well…
Putting down the paranoia hat. Happy weekend.
> would cripple the ability to (re-)train AI systems which might be used in military defense systems
Not sure you're familiar with defense update and release schedules. As long as this gets fixed sometime in the next 5+ years, everything will be fine.
> would cripple the ability to (re-)train AI systems which might be used in military defense systems.
Crippling specific use-cases is quite difficult: how could you distinguish, at the hardware/firmware level, object detection for fighter jets from object detection for cars? Under the hood everything is just a bunch of compute units with extremely wide ALUs. I would even say it's next to impossible to cripple "AI" without crippling graphics engines and most GPGPU kernels.
EDIT: Ah, you meant drivers. Yeah, that's perhaps more doable (since the OS can provide context on the calling application), but also more detectable by end-users: many people diff drivers to find patched vulnerabilities, so security researchers would eventually notice it.
Picking up said hat, we can ask why they would duplicate functionality already in the hardware if they could just steal the keys.
It's not a very good hat, honestly.
That's just a very, very weird thought. Sorry, but no one just hacks into NVIDIA's driver dev department and injects complex code to cripple ML training.
It's just not something someone can casually do. And there is also nothing preventing NVIDIA from debugging the ML issue and reverting the change.
AI aside, hacking into the driver's build process to inject hidden backdoors into the drivers could be a realistic attack.
Is it realistic though?
Hacking into NVIDIA's corporate network, infiltrating their git server, disabling security scans and then injecting a backdoor undetected into complex code?
In a process which is highly controlled because it's a very central piece of software.
Very unrealistic.
It's easier to find or buy zero-days in the wild for the same goal.
Well... that's exactly what happened to SolarWinds last year, didn't it?
Actually smarter than that - they got into the build system and added the malicious code in the build process so you couldn't see it in the repository.
Do you think it's that difficult for a state-sponsored body to infiltrate a commercial company?
Given the effort my big software company puts into the requirements around releasing software, I would say yes.
Big companies like Nvidia have background checks, independent security teams, etc.
Impossible? No. But other means are still easier and cheaper.
Didn't a bunch of Linux distros get infected with a "Ken Thompson Hack" a while back?
https://softwareengineering.stackexchange.com/questions/1848...
OK, I think it was Delphi now, but my brain remembered Debian. lol.
There is a double cross-compilation method to detect whether you are infected.
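For reference, this is (if I'm remembering the same thing) David A. Wheeler's "diverse double-compiling" countermeasure to the Ken Thompson attack: rebuild the suspect compiler from its own source, once bootstrapped through an unrelated trusted compiler, and check that the results converge. A rough sketch, with hypothetical compiler names, paths, and build flags:

```python
# Diverse double-compiling, roughly: if the suspect compiler binary really
# corresponds to its published source, then bootstrapping that source through
# an independent trusted compiler should converge on the same self-built binary.
# All compiler names, paths, and the "--build-self" interface are hypothetical.
import hashlib
import subprocess

def build(compiler: str, source_dir: str, output: str) -> str:
    """Use `compiler` to compile the compiler's own source tree (made-up CLI)."""
    subprocess.run([compiler, "--build-self", source_dir, "-o", output], check=True)
    return output

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

SOURCE = "suspect-compiler-src/"   # published source of the compiler in use
SUSPECT = "./suspect-cc"           # the binary you actually run today
TRUSTED = "./trusted-cc"           # an independent compiler you trust

stage1 = build(TRUSTED, SOURCE, "stage1-cc")         # source built by trusted compiler
stage2 = build(stage1, SOURCE, "stage2-cc")          # source built by stage1
self_built = build(SUSPECT, SOURCE, "selfbuilt-cc")  # suspect rebuilding itself

# With deterministic builds, a clean compiler yields identical results here;
# a "trusting trust" backdoor that re-inserts itself shows up as a mismatch.
print("clean" if sha256(stage2) == sha256(self_built) else "MISMATCH: investigate")
```

The whole scheme assumes the builds are reproducible and that the trusted compiler really is independent of the suspect one.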
This has always been a problem. Third-party closed-source OS components are a massive security risk. The people of the next century will look back on us as barbarians.
Rooting for leaks of info/keys/specs that allow nouveau to legally evolve.
> Another major concern is that NVIDIA will now have to ensure that their services and the software they are providing to end-users is entirely free of any viruses or malicious code that could affect them.
I don't know, things like this just show how great it is to put unknown code into your kernel.
Russians?
Well, it has potential for a singularly unpleasant watering-hole attack, few states have a blindingly obvious track record of such attacks, and exactly one of those is in the acute phase of open acts of aggressive warfare... Seems pretty clear what the highest-priority working hypothesis is until the evidence is in.
Number two could well be entertaining ideas about shaving a couple of items off their conquest list while the action keeps the world busy, though. If so, both trojanizing a particularly poorly defended component of billions of computing devices worldwide and securing fuller access to the software and plans for "AI accelerators" would seem desirable.
Or all their talent made itself scarce when they saw the writing on the wall, and what they have left is script kiddies who are capable of defacing a website when given a target to take down.
It's bad to underestimate the enemy, but also bad to overestimate them.
It's bad to make so many powerful enemies.
Hopefully Putin realizes he has no more claim over whatever holy grail they are chasing than anybody else.
Coincidence?