Valve accused of ignoring existing RCE vulnerability in Source games for 2 years
I have a friend who used to work at Valve as a software engineer - he mentioned to me that the entire source networking stack is chock full of unchecked buffers and all sorts of potential for fairly trivial RCEs, but due to Valve's internal structure (or lack thereof) there really isn't any incentive for anyone to fix them.
This was 5-6 odd years ago and he no longer works there, so things might have changed, but based on this tweet it seems unlikely.
> due to Valve's internal structure (or lack thereof) there really isn't any incentive for anyone to fix them
This seems to be a common theme with problems at Valve.
This seems common in the industry at large. At my job it's impossible to fix an issue unless someone specifically puts in a ticket for it. I look at all the bugs in the code taunting me. Little landmines either nobody has stepped on yet or was too lazy to write a ticket for. Some tickets languish for years in the tracking system we use until the almighty scrum master doles it out. I am in hell.
At my previous client, the scrum master didn't decide 100% of the work, we could pick a small percentage of technical items to work on.
The scrum master decides the work? Why has agile become such a mess?
It’s been a while, I actually forgot a bit about the process, and I got it mixed up. Point being: tasks without a clear business driver were regularly brought into the sprint by devs.
Other people choose their own tasks off the backlog. I chose my own and got a talking to. :)
Our lead says that all work must come from the scrum master but in practice it is selectively enforced.
Game devs don't optimize for security, because they're not incentivised to.
This is a common problem in other parts of the software industry, but Valve is missing a piece of the solution.
The typical problem at software companies is that developers are incentivized only to write code for new features that will land them promotions and look good on their resume--but bugfixes and security work is not part of that.
Management can counteract this with top-down initiatives. Programs like "fix-it week" or teams dedicated to security with different incentives in place. For example, Google suffers from the "promotion-oriented programming" about as badly as any other company, but they manage to take security seriously.
Valve has "flat hierarchy", which goes in quotes because the hierarchy isn't really flat, it's just hidden. Because the hierarchy is hidden, it's harder to address large-scale problems like institutional priorities... because there are fewer people to delegate large-scale problems to.
And then their MMO/MMORPG server gets p0wned, with everyone taking advantage: free virtual money, free assets added to their characters, and auto-aim packet correction.
and, as evidenced by Grand Theft Auto and Counter-Strike, players continue playing with hackers.
There is even a reason for, say, Rockstar to leave hackers alone in GTA: they act as artificial whales to lure real players into buying in-game currency in order to keep up or seek revenge.
There are a few games I can think of off the top of my head that have a symbiotic relationship with hackers.
The kind of hacking that happens in first person shooters has nothing to do with security failures. It is fundamentally impossible to stop aim bots. All you can do is continually play cat and mouse games to make it harder.
Why is it fundamentally impossible to stop aimbots?
Because the player's computer needs to know where the enemy is in order to render them on the screen, create footstep sounds, calculate shadows, etc. As long as the player has ultimate (root, admin, etc) access on that computer, it will always be possible for a program running with elevated privileges to read that enemy position data from the game's memory and make the required mouse movements to point at it and left-click.
The only way to prevent this is to remove elevated access from the player's computer. This has been done with varying levels of success on consoles, but even then it's only a matter of time.
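The mechanics described above reduce to a little geometry: once a cheat can read the enemy's world position and the local camera position from memory, computing the exact view angles to point at the target is trivial. A minimal, purely illustrative sketch (the coordinates and the `aim_at` helper are hypothetical, not any real game's API):

```python
import math

def aim_at(camera_pos, enemy_pos):
    """Given the local player's camera position and an enemy's world
    position (in a cheat, both read from game memory), return the yaw
    and pitch (in degrees) that point the view directly at the enemy."""
    dx = enemy_pos[0] - camera_pos[0]
    dy = enemy_pos[1] - camera_pos[1]
    dz = enemy_pos[2] - camera_pos[2]
    yaw = math.degrees(math.atan2(dy, dx))       # horizontal angle
    horiz = math.hypot(dx, dy)                   # distance in the ground plane
    pitch = math.degrees(math.atan2(dz, horiz))  # vertical angle
    return yaw, pitch

# Enemy directly ahead at the same height: no rotation needed at all.
print(aim_at((0, 0, 64), (100, 0, 64)))  # → (0.0, 0.0)
```

Everything after this (translating angles into synthetic mouse movement, hiding the reader process) is engineering detail; the core computation is a couple of `atan2` calls, which is why the comment above calls the cat-and-mouse game fundamental.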
because it requires you to distinguish between a human's aim and a bot's aim, which is pretty much impossible with a good enough bot
I think Steam actually primarily catches cheaters by detecting other running software rather than by looking at input patterns. I'm not sure whether it would detect a cheat implemented via a camera and custom mouse hardware that sends USB events you didn't generate.
But I also think a lot of the hackers in both GTA and CS are cheating in ways that no regular user input could trigger, they're compromising the software at a lower level than that.
From what I know, VAC (Valve Anti-Cheat) just looks for processes running on the system and detects injections into CS's memory. Then, for CS:GO specifically, there's Overwatch, in which players review other reported players' gameplay to determine whether they were cheating, and VacNET, a machine learning system trained on the data from Overwatch to detect aimbots that way. There's a really good talk that someone from Valve gave about 3 years ago[0].
The bigger problem is that even with input recognition, wallhacks remain: seeing other players through walls is an advantage almost as large as aimbotting in tactical shooters like CS.
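VacNET's actual features and model aren't public, but the idea of classifying input patterns can be illustrated with a deliberately simplistic heuristic: humans accelerate smoothly onto a target and overshoot, while a naive aimbot snaps the whole rotation into a single frame. A toy sketch (the threshold and trace data are invented, nothing here reflects Valve's real system):

```python
def looks_like_aimbot(angle_deltas, snap_threshold=40.0):
    """Flag a sequence of per-frame view-angle changes (in degrees) that
    contains an instantaneous 'snap': a huge single-frame rotation coming
    directly after near-stillness. Real systems learn over far richer
    features; this hand-tuned rule is purely illustrative."""
    for prev, cur in zip(angle_deltas, angle_deltas[1:]):
        if cur > snap_threshold and prev < 1.0:  # near-still, then a big jump
            return True
    return False

human = [0.5, 2.0, 6.0, 9.0, 4.0, 1.0]  # smooth acceleration onto target
bot   = [0.2, 0.1, 85.0, 0.0, 0.0]      # instant 85-degree snap
print(looks_like_aimbot(human), looks_like_aimbot(bot))  # → False True
```

This also shows why good "humanized" aimbots are hard to catch from input alone: a bot that fakes the smooth-acceleration curve defeats any rule of this shape, which is the arms race the thread describes.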
The comment above was specifically about aimbots, i.e. cheats that mimic a person. Such cheats can be hidden from the system well enough for it to not know there is a program controlling the input. I'm not saying all kinds of cheats can stay 100% undetected and functional with enough effort on the hackers' side, that's obviously false.
Indeed, there's still plenty of these.
Dozens of Counter-Strike exploits exist and the cheating scene has grown rampant. Valve simply doesn't care about the Source engine. Any new CSGO player will tell you the anti-cheat doesn't work, I know first-hand.
The lack of care regarding source engine netcode extends to every part of the source engine, including Valve Anti-cheat.
The anti-cheat is trivial to reverse (several PUBLIC bypasses have existed for years on github, with zero patch), the engine source has been leaked, reverse engineered, and fiddled with by thousands of 14 year old kids. It is pathetically easy to bypass, for example, by changing a single byte in memory you can see through walls, see enemy money, etc. See this video I found about how miserably broken it is: https://files.catbox.moe/8e3bxz.mp4
It is in my opinion the greatest loss to gaming that a classic, legendary game like Counter-strike got completely ruined by lack of care by a company that profits millions off of the case unboxings.
> CSGO player will tell you the anti-cheat doesn't work, I know first-hand.
> It is in my opinion the greatest loss to gaming that a classic, legendary game like Counter-strike got completely ruined by lack of care by a company that profits millions off of the case unboxings.
have you played the game in recent years? this has not been the case for me or the people I play with at all.
when playing on high trust-factor accounts, cheating is basically eliminated.
the experience for newer players is pretty bad but once you convince the system you're trustworthy, the algorithm does an extremely good job of not matching you with cheaters.
what valve lacks in boring, sensible solutions they make up for with interesting often much more complex workarounds (see: the open-world csgo danger-zone map shoved into a game with a room-based engine)
Just 2 days ago on prime I ran into a string of cheaters. At one point we had 2 on the enemy team and it caused someone on my team to go toggle. 3 cheaters in one match. On old accounts with everything.
I know he couldn't be an expert, but the person on my team says he can be blatant every game and never get banned because we're on prime. I don't want to believe that, but then he had a lot of items and didn't mind spinbotting at all.
From my understanding the CS:GO matchmaking basically ranks how likely of a cheater it thinks you are, and matches you with people of a similar ranking. If you're queuing with people that are bragging about blatantly cheating you're probably in the "likely cheater" group.
This is all really just anecdotes, but here's my counter anecdote. I play csgo on and off with friends. None of us have ever cheated in csgo (or any other competitive online game for that matter). I'd say we get about 1 obvious cheater every 50 games, with 2-3 less obvious "maybe they're using wallhacks" as well. This is significantly improved from 3-4 years ago where we got a cheater once every 4-5 games.
I didn't queue with the cheater. He was a random on my team who happened to turn the cheats on when we were losing to a cheater. He left the game and everything to launch them.
The rest of us aren't cheaters. We have old steam accounts with lots of games, items, and play time. We have prime. We still got put into that lobby. I'm not good enough to look like a cheater on my playing alone either.
Exactly! I get blamed for cheating and reported (trust me, I don't), so whenever I play matchmaking I'm also in the "likely cheater" group. That's why I play on third-party servers with their own anti-cheat systems.
>when playing on high trust-factor accounts, cheating is basically eliminated.
Yes, but this is not a technical fix. You just hope that accounts with more "value" cheat less. Which is true in most cases.
That's another HUGE issue: stolen accounts are a massive underground market, and while your skins usually can't be stolen, the account can be played on by the thief and get banned. You can get stolen prime accounts for under $5 and high-value accounts for very cheap.
Trust does actually work a lot of the time. But you'd think account security would be easy for them to crack down upon.
Trust DOES help an immense amount. New players will NOT have good trust, though, hence why I said ask a new CSGO player.
If you play on Asia region there’s 8/10 chance you will be matched with a hacker from China. The hacking industry there is making serious money.
>and fiddled with by thousands of 14 year old kids
People think you're kidding, but it's really that easy on Source! For a while, the most popular TF2 (a Valve Source game) hack was created by a 15 year old. He made at least a million dollars in profit too! (can't remember if this factoid was verified or not, but he can definitely pay for college now) I wasn't nearly as talented, but I made some hacks for fun when I was 15 or 16 years old.
Video game cheats and anti-cheats are almost completely disjoint from remote code exploits like what are reported in the OP.
>The lack of care regarding source engine netcode extends to every part of the source engine, including Valve Anti-cheat.
Normally, I can handle some cheating in games, you just kinda deal with it, but holy fuck, CS:GO was just nope. Between foul-mouthed children and essentially watching god-tier hackers play against each other while you just die over and over.
Yeah... no, not exactly fun.
Do you know for sure people are cheating? I have limited experience with GO, but in source expert players certainly seem like cheaters. I've definitely been called out for scouting 3 people from garage in office. I don't completely blame them, it seems like magic.
> foul mouthed children
Luckily you can now report accounts for this, and with enough reports they will be auto-muted now.
That's good to know. Hearing the squeaky voice of a prepubescent child repeating racial slurs incessantly for 10 minutes straight while giggling to themselves like it's the funniest fucking thing in the world gets a bit grating and kind of tries one's patience. It's not exactly what one typically enjoys listening to while trying to relax and kill some time gaming.
Pity you need the computer to press mute for you. I do it myself but I don't have a butler either.
The outrageous profit Valve makes from skins and the like is only half the story imo; their internal structure is the rest. Some of the stories ex-devs share from that place are just... idk, they explain the company's apparent ineptitude.
No anti cheat for FPS games has ever "worked", it can't. The best you can do is make it a little hard for the cheats to keep up with your detectors or protocol changes.
That's absolutely true! Valve anti-cheat has entirely failed at that. Free open source cheats exist that VAC just cannot detect. Period. Wanna know the secret? The fact that it's a Java cheat.
What's the magic of Java? Is it just that VAC doesn't/can't inspect the jvm?
Some speculation it is Java, some that it is Java's license, some that it is the license the cheat is under (open source), etc. No one really knows why.
Strangely, the only difference between one Java cheat that was detected and one that has been undetected for four years is that the original, old Java one that got detected was licensed under GPL, and the newer one is licensed under AGPL. Then there's a newer fork with a GUI that has been undetected for ~2 years.
VAC seems to be... unable or unwilling to detect Java cheats. The original, old one got detected, though, and it was Java, so there is a tad of confusion.
I have sent countless messages to valve offering patches for several current exploits, like the current server lagger/crasher that allows teleportation. They literally just do NOT care. At all.
You found a video that says they detect most old cheats from HL2 days and ban them, and then the video just shows random GitHub repos. What is that even supposed to prove? There's nothing stopping anyone from creating repos with cheats that get detected or don't even work. It's just a super cringe "gotcha" type video.
They are all undetected. VAC bypasses are public and have existed for years with no patch.
There's a place for being patient and lenient, but HackerOne consistently seems to not shut down malfunctioning programs that never pay rewards and flat out stop talking to you, yet continue to collect bugs. Such a relationship is commonly called fraud so I suggest reporting HackerOne to the Federal Trade Commission as I have.
The premise of bug bounties is that the reward amount is at the discretion of the program host and that the time incurred by developing a fix will influence the moment of payout, but refusing to pay and even communicate (for years!) for clearly eligible submissions is well beyond a reasonable interpretation of the conditions, and to consistently keep facilitating this abuse is simply fraudulent.
This matches my experience. Additionally, they prohibit disclosure in such cases, effectively making them complicit in delaying (best case) or in many cases completely suppressing disclosure.
This is why I have a separate machine for "gaming" and "work"
Some game companies (Riot Games) even install their anti-cheat software so that it loads in ring 0. Even with their best efforts, cheaters will still prosper.
Might even go a step further and firewall my gaming machine off from the rest of my network.
No, anti-cheats in ring0 haven't eliminated cheaters, but that was never the point. The point is to make it more difficult to cheat. And they have succeeded in that. Check any cheat forum like unknowncheats. You'll see that most hackers now have to chain multiple (complex) exploits together to get their cheats working, only to get it patched by the anti-cheats a few days/weeks later. This is way more difficult and prone to detection than ReadProcessMemory was before anti-cheats went ring0.
Surely the end state is cheats that even ring0 can't see i.e. read the display directly, act through the mouse.
Maybe we should run the entire OS in the game's hypervisor?
I was actually thinking that you should be able to build a bot for MMOs and other kinds of games that require farming with a Raspberry Pi or Arduino acting as a mouse, with a camera for image recognition. I don't know how feasible that is, but it would be undetectable by anti-cheat software.
Not really, some anti-cheat analysis is server-side and designed to catch people acting bot-like.
Yep. Not to mention that MMO game bots aim at automated resource farming, and the owner still needs to somehow sell the proceeds.
Some of the MMO games I've played used this gold transfer "graph" analysis that worked pretty well with really low False Positive Rate.
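The specific heuristics such systems use aren't public, but the core idea is simple: build a directed graph of gold transfers and flag accounts that receive large totals from many distinct senders, the classic signature of farming bots funneling gold to a seller account. A stdlib-only sketch with made-up transfer data and invented thresholds:

```python
from collections import defaultdict

def find_gold_sinks(transfers, min_sources=3, min_total=10_000):
    """transfers: list of (sender, recipient, amount) tuples.
    Flag recipients that receive at least min_total gold from at least
    min_sources distinct senders -- many low-level accounts funneling
    value to one sink is what real transfer-graph analysis keys on."""
    totals = defaultdict(int)
    sources = defaultdict(set)
    for sender, recipient, amount in transfers:
        totals[recipient] += amount
        sources[recipient].add(sender)
    return [r for r in totals
            if totals[r] >= min_total and len(sources[r]) >= min_sources]

transfers = [
    ("bot1", "seller", 5_000), ("bot2", "seller", 4_000),
    ("bot3", "seller", 6_000), ("alice", "bob", 200),
]
print(find_gold_sinks(transfers))  # → ['seller']
```

One reason this approach has a low false positive rate, as the comment notes, is that legitimate trading rarely produces the many-to-one fan-in pattern at scale; guilds and friends trade in both directions.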
Yeah, even if someone made a physical robot that did everything, they'd notice when it did stuff like playing for 1,000 hours without ever taking a break or talking to anyone.
Bots have long been designed to account for these types of checks by having scheduled hours and jittered breaks. Private messages and name mentions can alert the bot owner so they can respond manually. I've even seen bots that will pipe private messages to an IRC channel so that any number of restricted people can respond to the messages. It's been a long time since I've worked with game bots so I'm sure they're even more advanced now.
Yeah, I know it's always an arms race, but the trick is to always give them something they weren't anticipating that's hard to deal with in code. There are always methods that would alert a human to something odd going on that don't alert the bot's methods for perceiving its surroundings.
One of the best tricks is to show them messages via a method outside of normal chat which a normal player would see on their screen, but which a bot would not receive as 'chat'.
Just have your bot log off and "sleep" randomly for 4-10 hours every night, and log off for 15 minutes every few hours during the day. If you ever get a private message, have your system play a beep (or ping you on IRC then/Discord nowadays).
As for not talking to anyone, a surprising amount of people play MMOs just like that, so it's not really atypical for a player to never communicate. Runescape even has an account choice, "Ironman Mode", where you have to play the game self-sufficiently, and can't trade with or rely on any other players. You can still chat with other players if you want, but you don't have to.
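The scheduling tricks described above are easy to sketch: a nightly 4-10 hour sleep plus play blocks separated by short jittered breaks, so the session log never shows a 24/7 grind. A toy generator (all the numbers are invented for illustration, not taken from any real bot):

```python
import random

def daily_schedule(rng, day_hours=24):
    """Return (sleep_hours, sessions) for one bot-day: a random 4-10 h
    nightly sleep, then 2-4 h play blocks separated by 15-30 minute
    breaks. Sessions are (start_hour, end_hour) pairs within the day."""
    sleep = rng.uniform(4, 10)
    sessions = []
    t = sleep
    while t + 2 <= day_hours:               # only start a block if >= 2 h remain
        play = min(rng.uniform(2, 4), day_hours - t)
        sessions.append((round(t, 2), round(t + play, 2)))
        t += play + rng.uniform(0.25, 0.5)  # 15-30 minute break
    return round(sleep, 2), sessions

sleep, sessions = daily_schedule(random.Random(42))
print(sleep, sessions)
```

Which is also why, as the replies point out, server-side detection has moved past simple uptime checks toward behavioral probes (unexpected messages, out-of-band prompts) that a schedule alone can't fake.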
I've seen people try that, but the admins just sit there quietly watching them loop for an hour or something, then note how long it takes them to respond to a simple hello.
Or in some games, they can send messages in a way that a human would see, but not a bot who expects the messages to come over chat. For example, waving a sign in front of the character's face with a message or whatever. It helps that the admins can also hide from normal presence detection, even though they're visible on the bot's screen, visually.
I've literally watched admins ban a bot using these precise countermeasures. The trick is to always keep giving them new things they haven't thought of to adapt to.
With MMOs you can actually reverse engineer the network protocol and build yourself a custom client. Completely avoids any anti-cheating solution since they're not even running.
With mobile games it's ridiculously easy. I actually made daily task farming bots for a couple mobile games I used to play. The hardest part was getting the bot to log into the game. Completely neutralized the habit-forming strategies of these game companies. Ironically the bot was statistically indistinguishable from any sufficiently-addicted player.
Example for Valorant and a Raspberry Pi: https://www.youtube.com/watch?v=d1jz8qbzfIk
No need for a camera when you can just stream the screen.
A lot of cheats involve reading in memory game state to see through walls, which your screen grabber won't be able to do.
You can use DMA to read memory in an undetectable way.
It seems that a lot of people forgot about things like Sony installing rootkits on people's PCs. Now it's accepted for gaming anti-cheat software?
In my mind there's a huge difference between the two. The Sony rootkit was installed in secret, full of security holes, hard to remove, and made by a vendor that appeared to give 0 shits about said security holes.
All of the anti-cheat solutions I've seen that run in kernel mode are none of those things. They make it well known that they're installing, are made by vendors that actively care about the security of their products, and are trivially easy to remove once they're no longer needed.
Genshin Impact is a recent game that has included a kernel mode anti-cheat. I would be very surprised if the majority of players know that it exists, or understand what it means to have it run in kernel mode.
The Genshin website previously allowed anyone to view the phone number you have linked to your account via the password reset mechanism. Due to common reports of accounts getting stolen (and unable to be recovered), two factor auth has been highly requested, but doesn't seem to be a priority. I'm skeptical that they strongly care about the security of their users.
Even if Genshins anti-cheat is completely secure, as kernel anti-cheat becomes more common it's inevitable that we will get an instance that is full of security holes. Unfortunately as long as the user can't play their favorite game without it, they will happily install it.
Even if the security is bad does it even matter? User mode is enough for malware. "Sure an attacker could mine crypto, DoS people, use me as a proxy, keylog me, use my webcam, steal my saved usernames and passwords, but at least they can't upgrade my graphics drivers", said no one ever.
Oh, there's a lot more "fun" stuff you can do in kernel mode. One comedic example is setting the CPU Vcore offset to +2.2V for fun/revenge. I don't know if it will destroy CPUs permanently, but it would be an interesting experiment.
More importantly though, once you're in the kernel, it's much easier to hide your presence from all manner of Windows sysadmin tools.
Genshin Impact's anti-cheat is not completely secure: you can use it to read/write umode memory / read kmode memory with kernel privileges: https://github.com/ScHaTTeNLiLiE/libmhyprot
Mirror repo after the original author took the repo down, but still exploitable AFAIK.
Explanation of the exploit here:
https://github.com/Luohuayu/evil-mhyprot-cli
Not as bad as capcom.sys:
https://mobile.twitter.com/TheWack0lian/status/7793978407622...
The effect is the same though: ring 0 code execution.
I’m an anti cheat dev and I think client-side anti cheats make no sense on a typical MMO. Pretty much all cheats for those types of games can be detected server-side. RuneScape is a great example of this.
Hopefully, Microsoft is going to follow in Apple's footsteps and close the access to the kernel for any and all programs. Yes, we will lose a lot, since Apple right now cannot cover all use cases of kernel access through new APIs, but we will gain so much in security and reliability.
I'm of the opinion that easy kernel access for all apps and games is ultimately not putting me in control of my computer.
Access to kernel mode on Windows is already pretty restricted as it is. As far as I understand, you either have to run your whole machine in a special "Test Mode" or have a specific kind of (expensive) code signing certificate.
But beyond that, I don't see how "more restriction" == "more control for the user"
Are you talking about driver signing?
They already kind of did, I only install PC games via the Windows store.
> ...make it well known that they're installing...
Many vendors originally hid the fact until they started receiving community backlash about it. For example, Riot with Vanguard originally hid*[0] that it was running 24/7, and also hid the fact that it blocked drivers, until people noticed and complained about it. Many games, PUBG Lite and Genshin Impact in recent memory, also do not reveal this to the user.
[0]: https://gameriv.com/vanguard-adds-a-system-tray-icon-to-give... *: I'm aware there was a blog post about it, but blog post about it != clear, upfront warning on install about behavior
> ...made by vendors that actively care about the security of their products...
Here's some fun, all involving anti-cheats:
- Using xhunter1.sys (XIGNCODE3) for an LPE: https://x86.re/blog/xigncode3-xhunter1.sys-lpe/ (still used in some MMOs!)
- Using capcom.sys (rootkit shipped with Street Fighter V) to write a rootkit: https://www.fuzzysecurity.com/tutorials/28.html
- Using mhyprot2.sys (from Genshin Impact) to read/write umode memory / read kmode memory with kernel privileges: https://github.com/ScHaTTeNLiLiE/libmhyprot (still exploitable, AFAIK!)
- Using BEDaisy.sys (BattlEye - shipped in Rainbow Six: Siege, Fortnite, etc) for handle elevation: https://back.engineering/21/08/2020/
In addition, you still need to trust the vendor (duh!). Some of them are essentially RATs, like BattlEye - it loads shellcode from the server that runs in BEService as NT/SYSTEM, and they can target code pushes by IP/ingame ID/etc. Reverse engineering the anti-cheat itself is not enough to trust it; it can change its behavior as it sees fit. They can even choose to specifically target you and steal your files, and there's a very high chance you'll never find out about it.
> ...and are trivially easy to remove once they're no longer needed.
Depends on how you define "trivially easy". For example, with Riot Vanguard, it installs/uninstalls separately from Valorant, so you need to remember that separately. Some others, like xhunter*.sys, install silently and aren't easy to uninstall at all unless you go delete files in System32. Others, like EasyAntiCheat/BattlEye (last I used them, it's been years), need special uninstaller .exes that are included with the game but are not registered with Windows and don't run automatically when uninstalling the game.
disgusting company, disgusting policies. But awesome research! ~nerrix
Depends, Windows Store and the respective sandbox is a thing.
This is the way.
Many games package in outright spyware that siphon all kinds of data off your machine including browsing history. Kerbal Space Program was infamous for this (they removed the spyware at some point but I haven't checked recently if it was ever added back in).
> Many games package in outright spyware that siphon all kinds of data off your machine including browsing history.
Please post details. Were they literally mining user data?
The spyware is called Red Shell and it got packaged with a bunch of popular games. Yes, it mines user data.
Thanks for the reference.
This is one of the reasons I like gaming on GeForce NOW. I can use my primary laptop, play any game without having to install anything, instantly alt-tab back to the desktop between rounds without any weird bugs or crashes, etc.
Game companies literally think they have the right to own your machine. This is the kind of garbage they force gamers to install on their machines:
https://www.theregister.com/2016/09/23/capcom_street_fighter...
https://mobile.twitter.com/TheWack0lian/status/7793978407622...
Their software also takes screen shots, walks the file system, scans people's processes... Any similarities to malware may or may not be mere coincidences. They're also known for false positives: banning people for receiving special strings via text message, unknowingly installing mods with hacks bundled in or due to the presence of development tools such as debuggers or even virtual machines. Good luck trying to reverse such a ban, the entire gaming community has already been conditioned to accept any decision as final and to even defend this practice. When coupled with DRM, this essentially means your license to play the game has been revoked with no refunds.
> Some game companies (riot games) even install their anti-cheat software so that is loads in the ring 0 space.
Why are separate machines required, rather than dual-booting? (i.e. Windows for games, Linux for everything else)
Because Linux and Windows bootloaders routinely screw with each other. I am NEVER losing another weekend to that crap again. A dedicated Windows gaming PC is the correct way to deal with this.
With a UEFI-GPT setup, use two ESPs (one for each OS) and you're good. Now that I have no software bootloaders that need to know about multiple OSes, I only use the BIOS's own boot device selector on startup.
That hasn't been the case for a long time. I have rEFInd that started life in a Windows 7 ESP dual-booting FreeBSD; now the same hard drive boots Windows 10 (upgraded from 7, not a fresh installation) and NixOS, all with the same rEFInd from the same ESP.
The correct way to do it is to have separate hard drives for different OSes. Then there is zero chance of them stepping on each other.
You can also run a virtual machine with a real graphics card attached to it via VFIO if your host has IOMMU support. Guess what this means for anti-cheat.
As the other user said, BattlEye now bans for this. I used a VFIO setup for a number of years but had to switch because of it.
Some anti-cheats like BattlEye try to detect if they're running in a VM.
Your computer is really a bunch of computers pretending to be a single computer.
Most of the components have firmware that can itself be loaded with malware.
Ah. So, if a Windows application runs in ring 0, it can put malware in a place such that it can then interact with the Linux install?
Is there _any_ way to bypass this, apart from separate machines? I didn't know this was possible.
It's a very real and terrifying threat. A standard PC has numerous components with their own firmware that can potentially be flashed. Some of those components may have integrity checking schemes that are supposed to ensure only vendor-signed code can be flashed or executed, but don't rely on those measures actually working as intended (and not being exploitable themselves). Hardware vendors are notoriously bad at this.
This is one of the reasons I'm so enthusiastic about the T2 and M1: a hardware root of trust designed by a competent vendor. (Yes, there is a flaw in the T2, but it requires physical access to exploit.) In my opinion, those are the only trustworthy desktops or laptops on the market right now. You'll notice AWS (Nitro) and Google (Titan) also have their own proprietary hardware security chips for the same reason.
Theoretically - and vice-versa.
Depends on what the avenue of exploit you're worried about is. You can disable BIOS flashing from the OS in the BIOS, but that might still be theoretically vulnerable to, say, compromising the Intel ME environment and flashing from there; a rootkit loaded in SMM could hang around until the machine is cold power cycled (and theoretically compromise the bootloader(s) to load itself and then chainload the "real" bootloader every boot); if you want to get really invasive, you could theoretically start flashing various microcontrollers attached to the system (say, a USB flash drive, or your HDD/SSD controller) to do malicious things.
These get increasingly unlikely (and unreliable, without knowing and targeting the specific hardware you're using) as your attacker model includes less resources, but not impossible. Intel ME code execution, BIOS and SMM rootkits, malicious USB flash drive firmware and HDD firmware have all been demonstrated (I haven't seen malicious SSD firmware, but there's nothing theoretically stopping it other than the controller doing a lot more on them), and a couple have even been found in the wild.
I hope those separate machines are also on separate network segments without a route in between.
According to a tweet that was also retweeted by the user @floesen_ who was mentioned in the original thread, the initial report 2 years ago was done using HackerOne but has probably not seen any helpful response from Valve [1]. There are also other reports of Valve not reacting to HackerOne reports appropriately [2].
It is currently unclear whether there is a publicly available PoC or any exploitation going on in the wild.
[1] https://twitter.com/AntiCheatPD/status/1380873722966503426
> There are also other reports of Valve not reacting to HackerOne reports appropriately
I'll second that.
I discovered and reported a vulnerability with the Steam client's Bluetooth pairing process via hackerone.
The issue was confirmed, but deemed "out of scope" as apparently "within Bluetooth range" runs afoul of the bug bounty's "requires physical access" exclusion.
8 months later (I haven't exactly kept on top of this) they're still demanding I keep it confidential. I'll follow it up...
Surely that's a contradiction? Either it's a security problem by their criteria, or it isn't. If it is, then they should pay up and fix it; if it isn't, then they have no legitimate reason to care if you put full details on the front page of $MAJOR_NEWS_SITE.
How can they demand that you keep it confidential if they've already declared it out of scope? People need to start releasing these exploits instead of staying quiet just to protect future HackerOne payouts. Once the exploits are public, I assure you that either Valve will scramble to fix them or people will start looking for safer alternatives.
One of the issues is that it is HackerOne making the demand, not Valve.
I have been involved with other bounties on that site in that time, related to other companies & products.
I suspect if I had "broken their (Hackerone) policy" with this issue in that time, there would have been problems receiving a reward from the other bounty programs relating to different companies...
This isn't the only reason I haven't publicised the issue more widely (I've had other things on my plate), but it is a consideration.
Just release it. Maybe Valve will have to do something once folks start losing their precious CS:GO skins?
Doubt that.
There is the so-called "Steam web API key scam," which has been ongoing for years at this point: scammers create phishing Steam login pages to grab people's credentials. With these credentials alone, the damage an attacker can do is still limited because of 2FA. However, the biggest flaw is that it is possible to automatically create API keys for the phished accounts, which allow 24/7 remote access to those Steam accounts without the user ever noticing. With this access, scammers then automatically alter trades at will, at any point in the future, milliseconds before people confirm them using their mobile device (2FA), e.g., by declining the original trade and setting up a new trade with a scammer's bot account that has changed its profile data to match the intended trading partner's.
This attack is mostly based on phishing, spoofing and confusion, but it could at least be made much harder by preventing automated API key generation and therefore indefinite access to an account (e.g., by implementing email confirmations or captchas for API key generation).
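To make the proposed mitigation concrete, here is a rough sketch of what a two-step, explicitly confirmed key-creation flow could look like. None of these function or variable names are Valve's real API; this is purely illustrative of the idea that a script holding only a phished login session should not be able to silently mint a key.

```python
import secrets
from typing import Dict, Optional

# Hypothetical server-side flow: a key is only issued after a single-use
# token, delivered out of band (e.g. by email), is presented back.
_pending: Dict[str, str] = {}  # account id -> outstanding confirmation token

def request_api_key(account_id: str) -> str:
    """Step 1: create a single-use token; a real system would email it."""
    token = secrets.token_urlsafe(16)
    _pending[account_id] = token
    return token

def confirm_api_key(account_id: str, token: str) -> Optional[str]:
    """Step 2: only the correct, unused token actually creates a key."""
    if _pending.get(account_id) != token:
        return None  # wrong or replayed token: no key is issued
    del _pending[account_id]  # token is consumed
    return "key-" + secrets.token_hex(16)
```

The point is not the specific mechanism but that key creation becomes an action the legitimate user must knowingly complete, outside the phished browser session.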
Every day, children and laypeople lose in-game items worth thousands of dollars. I'm an admin on a popular CS:GO and gaming Discord server with ~30k members, and we see such reports multiple times a week.
Valve has no incentive to fix this as long as it's not their money on the line and regulators aren't applying pressure.
Valve has been pretty aggressive about rolling out these kinds of policies compared to the rest of the industry (e.g., they were very early with requiring 2FA to be enabled for a period of time before sensitive actions like trades, and with adding warning interstitials on links that leave Steam). I don't think the incentives have changed that much.
So, here's what makes me confused about your story:
1. I don't see any kind of activity hooks in IEconService that would let the attackers know via a callback that a trade is underway. Are you saying that they're polling all the hijacked accounts at a high frequency to detect trades they could intercept? That seems like a highly divergent use case from normal uses of the API, and one that an abuse team would be motivated to prevent.
2. I thought the Steam trade confirmation dialog showed very specific information about just what was being traded for what. I.e. it's not just that you're approving "a trade with foo", it's "a trade with foo (whom you've had as a friend for 20 days), where you give a xyzzy and receive a quux". Are the users just blindly approving trades worth thousands without even verifying?
I don't like either of your solutions, though. A captcha would just be a minor irritation for the attacker, and anyone who can be phished into logging in can be phished into approving the key generation. It seems that the bigger problem here is that the API keys are unscoped. Once keys are scoped, it's easier to inform the user in the approval flow about just what they're approving, and viable to nag users into revoking access for apps with dangerous permissions.
> Are the users just blindly approving trades worth thousands without even verifying?
People do. Many years ago I started playing an MMOG and the old timers were all discussing some incredibly rare new item. So I said I had one, and someone said he'd give me 100 million credits for it. For comparison, I'd just spent several hours grinding out about 10 credits. So I sent him a formal offer - some random piece of junk for 100 million credits - and he was so excited he clicked OK without reading what he was getting. He was so angry! He spent weeks spewing venom on the forums.
Of course, this wasn't real money, but in terms of time spent earning it he suffered a significant loss.
> Valve has been pretty aggressive about rolling out these kinds of policies compared to the rest of the industry.
True indeed.
> Are you saying that they're polling all the hijacked accounts at a high frequency to detect trades they could intercept?
Yes.
I have to admit, the "milliseconds before" part was just wrong; I oversimplified for attention and got it wrong.
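To illustrate how little machinery the polling approach takes: a real bot would repeatedly hit IEconService/GetTradeOffers/v1 with the stolen web API key. In this sketch the HTTP call is abstracted into a `fetch` callable so the logic stands on its own; all names besides the endpoint are illustrative.

```python
import time
from typing import Callable, Dict, List, Set

def watch_offers(fetch: Callable[[], List[Dict]],
                 on_new_offer: Callable[[Dict], None],
                 polls: int,
                 interval: float = 1.0) -> None:
    """Poll for trade offers and fire a callback on each unseen one."""
    seen: Set[str] = set()
    for _ in range(polls):
        for offer in fetch():          # one API hit per poll, per account
            if offer["tradeofferid"] not in seen:
                seen.add(offer["tradeofferid"])
                on_new_offer(offer)    # e.g. decline and re-create the trade
        time.sleep(interval)
```

Run against thousands of hijacked accounts, this produces a constant-rate access pattern per key, which is exactly the kind of signature an abuse team could rate-limit or alert on.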
> it's "a trade with foo (whom you've had as a friend for 20 days), where you give a xyzzy and receive a quux". Are the users just blindly approving trades worth thousands without even verifying?
Often, the attackers focus on swapping trade offers that are initiated by a 3rd party, e.g., a trusted middleman marketplace site that requests the item you want to offer (with nothing in return). 3rd party sites take a lot of the blame for "stolen items" because people don't understand how this scam works.
Here, the few seconds are between the 3rd party sending the trade offer and the compromised user accepting it, not between the user accepting the trade in the browser and on their phone. Since the phished user is not aware of the 3rd party site's account in the first place (it is not one of their friends), it is very easy to clone all the observable account details and make a scam bot account look like the one from the 3rd party site. There are actually characteristics that cannot be spoofed, but an ordinary user, not even aware that they were phished and that someone with control over their account can do such things, will not notice this.
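The key unspoofable characteristic is the partner's account identity itself (the SteamID64): persona name and avatar are freely changeable, the ID is not. A trading client could flag exactly this mismatch. A rough sketch, with illustrative field names loosely modeled on public Steam profile data:

```python
from typing import Dict

def looks_like_impersonation(offer_partner: Dict, expected: Dict) -> bool:
    """Flag an offer whose partner mimics a known account's mutable
    profile details (name, avatar) while having a different SteamID64."""
    same_face = (offer_partner["persona_name"] == expected["persona_name"]
                 and offer_partner["avatar_hash"] == expected["avatar_hash"])
    same_identity = offer_partner["steamid64"] == expected["steamid64"]
    return same_face and not same_identity
```

Surfacing a loud warning on this condition would catch the swapped-offer trick even when the cloned profile is pixel-perfect.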
Now, you could argue that preventing 3rd party sites from existing would also solve this issue. However, I see a valid use case for these 3rd party sites. The goal of my suggestion is to counter these attacks with minimal effort, without disabling automated trading capabilities completely.
> A captcha would just be a minor irritation for the attacker, and anyone who can be phished into logging in can be phished into approving the key generation.
I agree that it would only make the attack harder, not impossible, but considering the usual workflow I still see this as an improvement - as a first step.
The phishing is usually done by setting up a "legit" website, e.g. for skin trading, skin gambling, or even some non-financial purpose that requires authentication via Steam. This "legit" website then spawns a malicious "Login with Steam" OpenID credentials popup rendered inside (!) the web page. This means the website itself draws (depending on your OS and browser) a perfectly fine-looking browser popup window inside the legit page; it basically spoofs the browser UI itself. Laypeople get fooled easily by this: they sometimes do not even question why the window cannot be dragged out of the page, if they even try. These web apps are built to top-tier quality because, obviously, the profit potential is huge. There is probably even a framework sold at this point to easily recreate such pages.
What I'm trying to say is: Getting the user to login is easy because it's part of the legit workflow. The API key generation - not so much.
Basically, all I'm asking for is to make it hard to automatically transform a normal user account into a bot account used to automate trade offers. I know there is a valid use case for automated bot accounts and automated trade offers. But automating the action that enables such functionality for an account should be prevented at all costs, and it should have to be explicitly requested by the user, including a warning.
You are probably saying something similar with this statement, which I agree with:
> the bigger problem here is that the API keys are unscoped
TL;DR: Considering the effort involved, I think preventing automated Steam web API key generation is the best short-term way to make the attack a lot harder for the scammers.
> the website itself draws (depending on your OS and browser) a perfectly fine-looking browser popup window inside the legit page
The one I'm always sent just shows a Windows 10-decorated window. I've reported this exact fake Steam login popup to Cloudflare many times, yet the attackers keep deploying almost the same site time after time. Thankfully, Cloudflare eventually gets around to taking them down, but they aren't doing anything proactive to stop it from happening again.
HackerOne also at least strongly discourages publishing your findings if the developers refuse to take action.
https://www.hackerone.com/disclosure-guidelines states that "After the Report has been closed, Public disclosure may be requested by either the Finder or the Security Team." - so if the report just doesn't get closed, you can't disclose through the platform, and https://www.hackerone.com/policies/code-of-conduct says "Disclosing report information without previous authorization is not permitted."
To me, that reads as: you're not permitted to disclose the issue at all until the report has been closed and either 1) 30 days have passed and the security team hasn't requested an extension, or 2) "180 days have elapsed with the Security Team being unable or unwilling to provide a vulnerability disclosure timeline".
Due to this, I refuse to report through HackerOne.
floesen has posted a screenshot of the open ticket back in December. [1]
[1]: https://twitter.com/floesen_/status/1337107178096881666
2 years? Just leak it. At some point, "responsible" disclosure is not worth it.
More to the point, at some point it may be more responsible to put real pressure and a real deadline on them by revealing the flaw.
It depends on whether you think there's a reasonable chance that someone may be using that exploit by now. Carrot and stick approaches do not work without a reliable stick.
Edit: I suppose it also depends on how much you value going through the exact same process with valve for other bugs in the future. But in a situation like this it seems like little would be lost.
Absolutely! Going public is an important part of responsible disclosure
Totally believable. Someone I trust in the RE community told me about similar shenanigans when trying to report issues to Valve.
CVE Assigned https://cve.report/CVE-2021-30481
It would be a shame if an "anonymous hacker" "hacked" @floesen_, found their notes about the RCE and released it to public, accidentally of course.
It would probably also be a shame if floesen_ got sued for an NDA violation and had to spend tens of thousands of dollars in civil court explaining that they were hacked and it's not their fault.
Someone would have to do the suing though. Who would that be? It could be either Valve, or HackerOne.
HackerOne is almost certainly smarter than doing that because this would immediately ruin their reputation as a bug reporting platform (and expose that they're complicit in suppressing disclosure). They're much more likely to just ban the H1 account or issue some limited penalty.
Valve could potentially try, but the risk here also seems minimal: They also have a reputation to uphold, are experienced enough to know that suing security researchers paints a really bad picture and would draw attention to their vulnerabilities, and especially if their software is full of holes, this would almost certainly cause many people to disclose information about those.
At this point, just leak it to Project Zero anonymously and let them twist Valve's arm for you.
There's a small chance you might still get the bounty, because you reported it first. And if not, because it's already disclosed by another party, you can cry foul on social media.
Source engine itself is at least 16 years old, and has pretty direct lineage to the original 21 year old Quake engine (Quake (-> Quake II) -> GoldSrc -> Source). I would be more surprised if there weren't lots of RCEs in it.
Imagine you are Valve: why would you fix anything? Your money printer goes brrr regardless, and legal assures you the H1 deal prevents participants from leaking anything.
Unless you have the clout of Project Zero, "responsible" disclosure is anything but.
Full disclosure or no disclosure.